According to the online documentation, in order to back up data you must go to the Datastore Admin after you have enabled it. It is enabled on my production server.
Then, to import, you simply go to the admin page again and select the backup you just created. I have downloaded the backup from my Google Cloud Storage bucket onto a local drive, following all the steps of this handy guide: http://gbayer.com/big-data/app-engine-datastore-how-to-efficiently-export-your-data/
Now that I have the data locally, I am stuck trying to get it into a local dev server.
appcfg.py upload_data --application=dev~<APP_ID> --url=http://localhost:8080/_ah/remote_api --filename=<lengthyfilename>.backup_info
04:03 PM Uploading data records.
[INFO ] Logging to bulkloader-log-20161230.160323
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20161230.160323.sql3
[ERROR ] [Thread-1] RestoreThread:
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1555, in run
self.PerformWork()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\bulkloader.py", line 2838, in PerformWork
cursor.execute('select id, value from result')
DatabaseError: file is encrypted or is not a database
Inside the local admin page, there is no restore option on any of the Cloud Datastore pages; I'm at a loss. All the answers I can find are five years old and refer to outdated packages. How do you restore a Cloud Datastore backup into a local development app server?
One benefit of using the Datastore emulator is that it now supports import and export.
You can read about starting dev_appserver with the Datastore emulator here.
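For example, here is a rough sketch of that workflow (the port, project ID, and export path are assumptions, not taken from the question; check the emulator's startup output for the actual host:port, and note that the emulator imports exports produced in the managed export format):
# start the dev server with the Cloud Datastore emulator enabled
dev_appserver.py --support_datastore_emulator=true app.yaml
# then ask the emulator to import an export; input_url points at the overall export metadata file
curl -X POST "http://localhost:8081/v1/projects/<APP_ID>:import" \
  -H "Content-Type: application/json" \
  -d '{"input_url": "/path/to/export/<export_name>.overall_export_metadata"}'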
I have an AWS-Amplify project that had been building without a problem but is now failing.
# Starting phase: build
2021-11-20T00:40:02.506Z [INFO]: Failed to get profile: Profile configuration is missing for: amplify
2021-11-20T00:40:02.564Z [ERROR]: !!! Build failed
2021-11-20T00:40:02.564Z [ERROR]: !!! Non-Zero Exit Code detected
2021-11-20T00:40:02.564Z [INFO]: # Starting environment caching...
2021-11-20T00:40:02.565Z [INFO]: # Environment caching completed
Terminating logging...
The problem seemed to start after I made an error doing a pull request (in the wrong direction!); however, the problem has persisted despite reverting to an earlier commit.
I have also ensured all the Amplify code is up to date with amplify pull, and have tried amplify configure and amplify init on my development machine.
Other posts that describe problems with 'Profile Configuration' seem to be related to the development machine and setting up the CLI. This failure happens when I try to build on AWS using continuous deploys; building locally works fine.
So I got it to work.
Just delete the aws-exports.json and the amplify folder.
Then run the pull command from Amplify, which is something like:
amplify pull --appId XXXXXXXXXXX --envName dev
After a few minutes, it will prompt you to select:
AWS PROFILE
AWS KEYS
Select AWS KEYS and enter the credentials for a programmatic user, and it should be fine.
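Put together, the sequence is roughly this, run from the project root (the aws-exports file name/location and the app ID are placeholders; adjust them to your project):
rm aws-exports.json        # or src/aws-exports.js, wherever your project keeps it
rm -rf amplify/
amplify pull --appId XXXXXXXXXXX --envName dev
# when prompted, choose AWS KEYS and enter the programmatic user's access key and secret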
I have a problem deploying a Node.js (Angular) application from a Bitbucket repository to Firebase using a Cloud Build trigger.
I performed the steps from this article: https://cloud.google.com/build/docs/deploying-builds/deploy-firebase
The necessary APIs are turned on,
The service account has all required permissions,
The Firebase community builder is deployed and visible in Container Registry,
The cloudbuild.json file is added to the repository,
The Cloud Build trigger is created and points to one specific branch of my repository.
The problem is that after running the Cloud Build trigger I receive the following error: "Error: Not in a Firebase app directory (could not locate firebase.json)"
What could be the reason for this error? Is it possible to tell the trigger where in the repository the Firebase application and the firebase.json file are located?
EDIT:
My cloudbuild.json file is quite simple:
{
"steps": [
{
"name": "gcr.io/PROJECT_NAME/firebase",
"args": [
"deploy",
"--project",
"PROJECT_NAME",
"--only",
"functions"
]
}
]
}
Logs in Cloud Build Trigger history:
starting build "build_id"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/link
* branch branch_id -> FETCH_HEAD
HEAD is now at d8bc2f1 Add cloudbuild.json file
BUILD
Pulling image: gcr.io/PROJECT/firebase
Using default tag: latest
latest: Pulling PROJECT/firebase
1e987daa2432: Already exists
a0edb687a3da: Already exists
6891892cc2ec: Already exists
684eb726ddc5: Already exists
b0af097f0da6: Already exists
154aee36a7da: Already exists
37e5835696f7: Pulling fs layer
62eb6e670f1d: Pulling fs layer
47e62615d9f9: Pulling fs layer
428cea824ccd: Pulling fs layer
765a2c722bf9: Pulling fs layer
b3f5d0a285e3: Pulling fs layer
428cea824ccd: Waiting
765a2c722bf9: Waiting
b3f5d0a285e3: Waiting
47e62615d9f9: Verifying Checksum
47e62615d9f9: Download complete
62eb6e670f1d: Verifying Checksum
62eb6e670f1d: Download complete
765a2c722bf9: Verifying Checksum
765a2c722bf9: Download complete
b3f5d0a285e3: Verifying Checksum
b3f5d0a285e3: Download complete
37e5835696f7: Verifying Checksum
37e5835696f7: Download complete
428cea824ccd: Verifying Checksum
428cea824ccd: Download complete
37e5835696f7: Pull complete
62eb6e670f1d: Pull complete
47e62615d9f9: Pull complete
428cea824ccd: Pull complete
765a2c722bf9: Pull complete
b3f5d0a285e3: Pull complete
Digest: sha256:4b6b7214d6344c8247130bf3f5443a2851e39aed3ececb32dfb2cc25a5c07e44
Status: Downloaded newer image for gcr.io/PROJECT/firebase:latest
gcr.io/PROJECT/firebase:latest
Error: Not in a Firebase app directory (could not locate firebase.json)
ERROR
ERROR: build step 0 "gcr.io/PROJECT/firebase" failed: step exited with non-zero status: 1
The firebase.json file is placed in the repository at the following path: /apps/firebase/firebase.json
I can't see any way to point to that path in the Cloud Build trigger config.
I'm pretty sure that you aren't in the correct directory when you run the command. To check this, you can add this step:
{
"steps": [
{
"name": "gcr.io/cloud-builders/gcloud",
"entrypoint": "ls",
"args": ["-la"]
}
]
}
If you aren't in the correct directory, and you need to be in ./apps/firebase to run your deploy, you can add the dir parameter to your step:
{
"steps": [
{
"name": "gcr.io/PROJECT_NAME/firebase",
"dir": "app/firebase",
"args": [
"deploy",
"--project",
"PROJECT_NAME",
"--only",
"functions"
]
}
]
}
Let me know if that works better.
In our organization we are running the Artifactory Pro edition with daily exports of data to a NAS drive (a full system export). Every night it runs for around 4 hours and reports that the "system export was successful". The time has come to migrate our instance to a PostgreSQL-based one (it is running on Derby now). I have read that you need to do this with a full system import.
A few numbers:
Artifacts: almost 1 million
Data size: over 2TB of data
Export data volume: over 5TB of data
If you were also wondering why the export data volume is more than twice the disk space usage, our guess is that Docker images are deduplicated (per layer) when stored in the Docker registry, but on export that deduplication is lost.
Also, I previously had success migrating the instance by rsync'ing the data over to another server and then starting exactly the same setup there. That worked just fine.
However, when starting exactly the same setup on another machine (a clean install) and running a system import, it fails with the following log:
[/data/artifactory/logs/artifactory.log] - "errors" : [ {
[/data/artifactory/logs/artifactory.log] - "code" : "INTERNAL_SERVER_ERROR",
[/data/artifactory/logs/artifactory.log] - "message" : "Unable to import access server",
[/data/artifactory/logs/artifactory.log] - "detail" : "File '/root/.jfrog-access/etc/access.bootstrap.json' does not exist"
[/data/artifactory/logs/artifactory.log] - } ]
[/data/artifactory/logs/artifactory.log] - }
Full log is here: https://pastebin.com/ANZBiwHC
The /root/.jfrog-access directory is the Access home directory (Access uses Derby as well).
What am I missing here?
There are a couple of things we were doing wrong, according to the Artifactory documentation:
Export is not the proper way to back up a big instance. When running Artifactory with Derby, it is sufficient to rsync the filestore and derby directories to the NAS (see the sketch after this list).
Incremental export across several versions of Artifactory is NOT supported. Meaning, if you had a full export on version 4.x.x, then upgraded to version 5.x.x, then to version 6.x.x, and you had incremental exports along the way, that export will NOT be importable into version 6.x.x. After each version upgrade it is necessary to create a new full export of the instance.
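As a rough illustration of the rsync approach from the first point (the paths are assumptions: a default Linux Artifactory install under /var/opt/jfrog/artifactory and a NAS mounted at /mnt/nas; stop Artifactory first so the Derby database and filestore stay consistent):
systemctl stop artifactory
rsync -a /var/opt/jfrog/artifactory/data/filestore/ /mnt/nas/artifactory-backup/filestore/
rsync -a /var/opt/jfrog/artifactory/data/derby/ /mnt/nas/artifactory-backup/derby/
systemctl start artifactory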
I resolved the situation by removing the old export and doing a new full system export (around 30 hours). That full system export was then successfully imported on the other instance (around 12 hours).
P.S. The error is still cryptic to me.
I'm familiar with Firebase and Firebase Storage, but new to the Google Cloud SDK. I found this topic https://groups.google.com/forum/#!msg/firebase-talk/oSPWMS7MSNA/RnvU6aqtFwAJ and tried to copy the steps, but I get "IOError: [Errno 2] No such file or directory: u'cors-json-file.json'" in the terminal.
I found a bit more information in the docs at https://cloud.google.com/storage/docs/configuring-cors#configure-cors-gsutil; it describes the same process, and I get the same error.
I'm finding it hard to do any troubleshooting with the Google Cloud SDK since I have already set up functions in Firebase.
gsutil cors set cors.json gs://example-bucket
Example: if the cors.json file is on the D: drive, then the path will be something like d:/folderName/cors_filename. The gs:// URL is your bucket URL.
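For reference, a minimal end-to-end sketch (the origins, methods, and bucket name are placeholders; creating cors.json in the current directory and running gsutil from that same directory avoids the IOError above):
cat > cors.json <<'EOF'
[
  {
    "origin": ["https://your-app.example.com"],
    "method": ["GET", "HEAD"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
EOF
gsutil cors set cors.json gs://your-bucket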
It turned out I had not changed my local directory to the one containing 'cors-json-file.json' ...
I am coding a Java project and I'm automating the build and the publishing to JFrog Artifactory using SBT.
Whenever it's time to publish to Artifactory I want to do it using the Ivy directory layout and obviously publish the Ivy XML file along with the jar. I managed to achieve this by using the following lines in the build.sbt file:
crossPaths := false
publishTo := Some("Artifactory Realm" at "http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my")
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
publishMavenStyle := false
However, it only works when anonymous users are allowed to deploy into Artifactory. I realized that sbt is not actually passing my credentials to Artifactory but is, instead, logging in as anonymous.
My $HOME/.ivy2/.credentials file looks like this:
realm=Artifactory Realm
host=http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my
user=<my user name>
password=<my password>
However, if I change the Artifactory configuration to prevent anonymous users from deploying new artifacts, then when I run "sbt publish" I get the following output:
[error] Unable to find credentials for [Artifactory Realm # <Artifactory IP>].
java.io.IOException: Access to URL http://<Artifactory IP>:<Artifactory Port>/artifactory//org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar was refused by the server: Unauthorized
The Artifactory request.log file then contains:
20160219011657|319|REQUEST|10.0.2.2|anonymous|PUT|/org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar|HTTP/1.1|401|24978
I have also tried passing the credentials manually instead of using a file:
credentials += Credentials("Artifactory Realm", "localhost", "<USERNAME>", "<PASS>")
But I am getting the same result.
Any idea what I might be missing?
try:
host=<Artifactory IP>
old answer (doesn't work):
host=<Artifactory IP>:<Artifactory port>
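Putting the working version together, the ~/.ivy2/.credentials file would look something like this (the realm string must match exactly what the server reports in the sbt error, and host is just the bare hostname or IP, with no scheme, port, or path):
realm=Artifactory Realm
host=<Artifactory IP>
user=<my user name>
password=<my password>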
I had a different problem: I had the wrong realm set in my .credentials file.
Looking at the error output from sbt, I was able to figure out that I should use:
realm=Artifactory Realm
The error shows the expected values for realm and host:
[error] Unable to find credentials for [Artifactory Realm # myhost].