For the past two days, we have been getting "unknown blob" errors when pulling from JFrog. I am attaching a sample log:
Command ['ssh', '-o', 'StrictHostKeyChecking=no', '-o', 'LogLevel=ERROR', 'localhost', 'docker', 'pull', '<redacted>.jfrog.io/<redacted>:latest'] failed with exit code 1 and output 'latest: Pulling from <redacted>
f5d23c7fed46: Pulling fs layer
3f4aa1d1dde5: Pulling fs layer
52c4bf0b6229: Pulling fs layer
fe61f8f5a308: Pulling fs layer
ebeed9e8b27e: Pulling fs layer
89831686aa31: Pulling fs layer
2e2c5baec652: Pulling fs layer
b6fa760c79e4: Pulling fs layer
2e2c5baec652: Waiting
ebeed9e8b27e: Waiting
b6fa760c79e4: Waiting
fe61f8f5a308: Waiting
3f4aa1d1dde5: Verifying Checksum
3f4aa1d1dde5: Download complete
f5d23c7fed46: Verifying Checksum
f5d23c7fed46: Download complete
fe61f8f5a308: Download complete
ebeed9e8b27e: Download complete
89831686aa31: Download complete
f5d23c7fed46: Pull complete
3f4aa1d1dde5: Pull complete
2e2c5baec652: Verifying Checksum
2e2c5baec652: Download complete
b6fa760c79e4: Downloading
unknown blob
This seems to have started during the Kinesis outage. We first noticed it while trying to deploy a workaround during the outage; however, the problem still persists.
The image pulls fine from Docker Hub, so it's not corrupted. This is currently breaking our automated deploy/provisioning process, as we have to manually pull the failed images from Docker Hub.
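For reference, the manual fallback looks roughly like this (the image name is a placeholder, since ours is redacted): pull straight from Docker Hub, then retag so tooling that expects the JFrog name keeps working:
$ docker pull myorg/myimage:latest
$ docker tag myorg/myimage:latest <redacted>.jfrog.io/myorg/myimage:latest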
Thanks,
-Caius
Following #John's suggestion, I zapped the cache on the JFrog side, and that removed the issue.
It seems it was a stale/invalid cache issue.
Also, while looking at the JFrog logs, I did find this, which might be relevant:
2020-11-28T18:55:24.493Z [jfrt ] [ERROR] [b66d3ae308977fb1] [o.a.r.RemoteRepoBase:858 ] [ttp-nio-8081-exec-17] - IO error while trying to download resource '<redacted>: org.artifactory.request.RemoteRequestException: Error fetching <redacted>/blobs/sha256:9c11dabbdc3a450cd1d9e15b016d455250606d78eecb33c92eebfa657549787f (remote response: 429: Too Many Requests)
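Since the 429 points at Docker Hub rate-limiting the Artifactory remote, a quick way to inspect the remaining anonymous pull quota from the proxying host is Docker's documented ratelimitpreview/test check (assumes curl and jq are installed):
$ TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
$ curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit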
TL;DR: zapping the cache fixed the problem.
I am currently trying to run a ROS2 (Galactic) YOLOv5 wrapper on a ROS2 bag recorded on Galactic. I know the wrapper works when it is subscribing to my webcam; however, I want it to subscribe to a bag I recorded. My process to play the bag seems pretty standard: I open a terminal, source my Galactic environment, colcon build, source the install, and try to run my bag:
$ source /opt/ros/galactic/setup.bash
$ colcon build
$ source install/setup.bash
$ ros2 bag play boson_black/
but when I hit enter I get this error:
[ERROR] [1658428463.099175201] [rosbag2_storage]: Could not open 'black_boson/' with 'sqlite3'. Error: Failed to setup storage. Error: Could not read-only open database. SQLite error (10): disk I/O error
[ERROR] [1658428463.099218921] [rosbag2_storage]: Could not load/open plugin with storage id 'sqlite3'.
No storage could be initialized. Abort
On the rosbag2 GitHub, someone had a similar issue, which was due to not having the rosbag2 sqlite3 storage plugin installed; it is provided by the package rosbag2_storage_default_plugins. Assuming this was my issue as well, I ran:
$ sudo apt-get install ros-galactic-rosbag2-storage-default-plugins
Reading package lists... Done
Building dependency tree
Reading state information... Done
ros-galactic-rosbag2-storage-default-plugins is already the newest version (0.9.1-3focal.20220430.142028).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
which leads me to believe that this is not the problem for me. Testing my hypothesis, I tried to run my bag again and got the same error as before.
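For completeness, these are the sanity checks I plan to try next (the bag path is from my setup; the .db3 filename is a placeholder for whatever is inside the bag directory):
$ ros2 bag info boson_black/          # does rosbag2 parse the metadata at all?
$ ls -l boson_black/                  # is the .db3 file present and readable?
$ sqlite3 boson_black/<bag_name>.db3 "PRAGMA integrity_check;"   # requires the sqlite3 CLI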
I have no idea why this isn't working, but if anyone has run into a similar problem or has any ideas on what might be going wrong, I'd really appreciate it!
I am trying to run the example Grakn migration "phone_calls" (using Python and JSON files).
Before getting there, I need to load the schema, but I am having trouble getting the schema loaded, as shown here: https://dev.grakn.ai/docs/examples/phone-calls-schema
System:
- Mac OS 10.15
- grakn-core 1.8.3
- python 3.7.3
The Grakn server is started. I checked, and TCP port 48555 is open, so I don't think there is any firewall issue. The schema file is in the same folder (phone_calls) as the JSON data files for the next step. I am using a virtual environment. The error is below:
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn server start
Storage is already running
Grakn Core Server is already running
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn console --keyspace phone_calls --file phone_calls/schema.gql
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5f59fd46): com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] init query OPTIONS: error writing ). Please check server logs for the stack trace.
I would appreciate any help! Thanks!
Never mind -- I found the solution, in case anyone else runs into a similar problem. The server configuration file needs to be edited: point the data directory to your project data files (here: the phone_calls data files) and change the server IP address to your own.
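A rough sketch of the edit, in case it helps (the file path and key names are from my grakn-core 1.8.3 install and may differ between versions):
$ nano <grakn-install-dir>/server/conf/grakn.properties
#   data-dir=...      -> point this at your project's data location
#   server.host=...   -> set this to your machine's IP address
Then restart with 'grakn server stop' followed by 'grakn server start'.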
I recently upgraded our Artifactory repository from 2.6.5 to the current version, 5.4.6.
However, something seems to have gone wrong in the process: some artifacts throw an HTTP 500 error when you attempt to access them. Here is an example using wget:
wget http://xyz.server.com:8081/artifactory/gradle/org/jfrog/buildinfo/build-info-extractor-gradle/2.0.12/build-info-extractor-gradle-2.0.12.pom
--2017-09-12 12:17:13--  http://xyz.server.com:8081/artifactory/gradle/org/jfrog/buildinfo/build-info-extractor-gradle/2.0.12/build-info-extractor-gradle-2.0.12.pom
Resolving xyz.server.com (xyz.server.com)... 10.125.1.28
Connecting to xyz.server.com (xyz.server.com)|10.125.1.28|:8081... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2017-09-12 12:17:13 ERROR 500: Internal Server Error.
I verified this by going to the artifactory site, browsing to the object in question, and trying to download it. The result was the following:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Binary provider has no content for 'e52a9a9a58d6829b7b13dd841af4b027c88bb028'"
} ]
}
The problem seems to be in the final step of the upgrade process, upgrading from 3.9.5 to 5.4.6. The wget command above works on 3.9.5, but not on the 5.4.6 instance.
I found a reference to a "Zap Cache" function in older documentation and thought it might fix things, but I can't seem to find that function in the current UI.
Is anyone able to point me to a way to fix this issue, or to what I need to do or look for in the upgrade process in order to prevent it from occurring?
As a further data point, we're using an Oracle database for the full file store, if that matters in any way (using the tag: <chain template="full-db"> in binarystore.xml)
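If it helps anyone reproduce this, the stored checksum for a failing artifact can still be queried through Artifactory's File Info REST endpoint (credentials below are placeholders); the sha1 in the response should match the id in the 500 message, which would confirm the metadata survived the migration while the binary content did not:
$ curl -u admin:<password> "http://xyz.server.com:8081/artifactory/api/storage/gradle/org/jfrog/buildinfo/build-info-extractor-gradle/2.0.12/build-info-extractor-gradle-2.0.12.pom"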
Thanks in advance....
I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However, it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or SQLite, so I am a bit in the dark here; although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped to start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have read talk about changing PRAGMA settings to ignore the error (or the check), but I can't see how to implement this either.
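In case it is useful to anyone, the same checks csync runs can be reproduced by hand with the sqlite3 CLI (the journal filename below is an assumption; look for a hidden .db file in the local sync directory):
$ cd /path/to/local/sync/dir
$ ls -a | grep -i sync                               # locate the hidden csync journal
$ sqlite3 .csync_journal.db "PRAGMA quick_check;"    # journal name is a guess; use yours
$ sqlite3 .csync_journal.db "PRAGMA integrity_check;"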
Is anyone able to show me how to clear out the corruption?
*the local path is a mounted path to an AWS S3 bucket, but I think this is irrelevant because it is working fine on other systems.
I am getting started with Ethereum and building a Dapp (what the hell does that mean, by the way?). On the basic installation of the application (https://github.com/ethereum/wiki/wiki/Dapp-using-Meteor#connect-your-%C3%90app), I get this error upon attempting to connect:
geth --rpc --rpccorsdomain "http://localhost:3000"
I0804 23:48:24.987448 ethdb/database.go:82] Alloted 128MB cache and 1024 file handles to /Users/( . )Y( . )/Library/Ethereum/chaindata
Fatal: Could not open database: resource temporarily unavailable
I literally just got started: I set up Ethereum through Homebrew and made an account with geth. I can't get past this point.
Thank you!
Your geth client is already running in the background. You can attach to it by typing:
$ geth attach
in your command line. This will allow you to run commands on the geth client console.
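For example, once attached you can confirm the running node is the one you set up:
$ geth attach
> eth.accounts      // the account you created should be listed here
> net.peerCount     // basic connectivity check
> exit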