I'm managing an installation of OpenStack Juno (deployed with Mirantis 6.0) with two nodes, one controller and one compute. We're doing some testing, and for some reason (our IT team thinks it is something related to HAProxy) Swift stopped working.
At the moment almost everything works, but I can't create images or snapshots from instances. I can create snapshots from volumes and the other way around. The difference, I think, is the upload of the image: Glance fails to use Cinder to upload the newly created image.
I've been tasked with either repairing Swift or falling back to using Cinder as the storage system. The first would be the best solution, but I have no idea how to start.
I'm pretty new to all this, and I'm sorry I can't provide more details; I started working with OpenStack a few weeks ago and still don't have enough experience to troubleshoot this problem myself.
All I could find in the logs are references to "Failed to upload...", like these in /var/log/glance/api.log:
2015-12-16 12:29:47.604 6182 ERROR glance.api.v1.upload_utils [-] Failed to upload image 1856c024-d75a-49e3-a6a9-dc3d7b15e8cc
2015-12-16 12:29:47.604 6182 TRACE glance.api.v1.upload_utils raise NotImplementedError
2015-12-16 12:29:47.604 6182 TRACE glance.api.v1.upload_utils NotImplementedError
2015-12-16 12:32:22.444 6198 ERROR glance.api.v2.image_data [-] Failed to upload image data due to internal error
2015-12-16 12:32:22.444 6198 TRACE glance.api.v2.image_data self.notifier.error('image.upload', msg)
2015-12-16 12:39:08.768 6182 ERROR glance.api.v2.image_data [-] Failed to upload image data due to internal error
Thanks!
I found the solution by trial and error. It all comes down to glance-api.conf; I needed to add or modify these settings:
default_store = cinder
stores = glance.store.filesystem.Store,
         glance.store.http.Store,
         glance.store.cinder.Store,
         glance.store.swift.Store,
filesystem_store_datadir = /var/lib/glance/images/
I've been attempting to deploy a new instance of a contract I've deployed before, from an environment I haven't used in a while (though it has deployed successfully before). None of the details of the contract or my configuration have changed, and the HTTP error doesn't appear to be related to any of that. What can I do to debug and resolve this?
ProviderError: HttpProviderError
at HttpProvider.request (/home/user/Solidity/env/node_modules/hardhat/src/internal/core/providers/http.ts:78:19)
at LocalAccountsProvider.request (/home/user/Solidity/env/node_modules/hardhat/src/internal/core/providers/accounts.ts:187:34)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at ChainIdValidatorProvider._getChainIdFromEthNetVersion (/home/user/Solidity/env/node_modules/hardhat/src/internal/core/providers/chainId.ts:33:17)
at ChainIdValidatorProvider._getChainId (/home/user/Solidity/env/node_modules/hardhat/src/internal/core/providers/chainId.ts:17:25)
at ChainIdValidatorProvider.request (/home/user/Solidity/env/node_modules/hardhat/src/internal/core/providers/chainId.ts:55:29)
at EthersProviderWrapper.send (/home/user/Solidity/env/node_modules/@nomiclabs/hardhat-ethers/src/internal/ethers-provider-wrapper.ts:13:20)
at getSigners (/home/user/Solidity/env/node_modules/@nomiclabs/hardhat-ethers/src/internal/helpers.ts:45:20)
at getContractFactoryByAbiAndBytecode (/home/user/Solidity/env/node_modules/@nomiclabs/hardhat-ethers/src/internal/helpers.ts:288:21)
at main (/home/user/Solidity/env/scripts/deployVerify.js:6:27)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Found some great advice here: https://stackoverflow.com/a/74842616/14111336
It turns out my Alchemy RPC app had stopped working. I reset to a new app, and it worked immediately as intended.
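For anyone debugging something similar, one quick check is to send a bare JSON-RPC request to the RPC endpoint outside of Hardhat, to confirm whether the provider app (rather than the contract or the Hardhat config) is what stopped responding. This is only a sketch under my own assumptions: check-rpc.js and the RPC_URL environment variable are placeholders, not anything from the original setup.

// check-rpc.js -- hypothetical helper, not from the original post.
// Sends a bare JSON-RPC "net_version" request (similar to the call Hardhat's
// ChainIdValidatorProvider makes in the stack trace above) to see whether the
// endpoint still answers.
const https = require('https');

const rpcUrl = process.env.RPC_URL; // assumption: your Alchemy (or other) endpoint URL

const payload = JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'net_version', params: [] });

const req = https.request(
  rpcUrl,
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(payload),
    },
  },
  (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => console.log(`HTTP ${res.statusCode}: ${body}`));
  }
);

req.on('error', (err) => console.error('RPC endpoint unreachable:', err.message));
req.write(payload);
req.end();

Run it with RPC_URL=https://... node check-rpc.js. If the response is an HTTP error (or nothing at all) instead of a JSON body with a result field, the provider app is the thing to fix, as in the answer above.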
I have a mobile and web app that use the Firebase Realtime Database, and there are some long-running tasks which are served on servers with the help of firebase-queue and firebase-admin. The long-running task is to find out who else in a person's contact book is using the mobile app: when you install the app, it sends a task to the server with your contact book data and asks it to find the people in your contact book who are also using the app. Every now and then I see two types of errors in the logs. The first error, shown below, also causes the node process to stop.
/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:15373
return queue[i].onComplete(new Error(abortReason), false, null);
^
Error: maxretry
at /Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:15373:52
at exceptionGuard (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:4018:9)
at repoRerunTransactionQueue (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:15386:9)
at repoRerunTransactions (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:15279:5)
at /Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:15260:13
at /Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:7061:17
at PersistentConnection.onDataMessage_ (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:7088:17)
at Connection.onDataMessage_ (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:5882:14)
at Connection.onPrimaryMessageReceived_ (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:5876:18)
at WebSocketConnection.onMessage (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:5778:27)
at WebSocketConnection.appendFrame_ (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:4491:18)
at WebSocketConnection.handleIncomingFrame (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:4539:22)
at Client.mySock.onmessage (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:4438:19)
at Client.dispatchEvent (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:2883:30)
at Client._receiveMessage (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:3042:10)
at Client$2.<anonymous> (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:2924:49)
at Client$2.emit (node:events:539:35)
at Client$2.emit (node:domain:475:12)
at Client$2.<anonymous> (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:2186:14)
at pipe (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:1503:40)
at Pipeline$1._loop (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:1510:3)
at Pipeline$1.processIncomingMessage (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:1479:8)
at Extensions$1.processIncomingMessage (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:1645:20)
at Client$2._emitMessage (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:2177:22)
at Client$2._emitFrame (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:2137:19)
at Client$2.parse (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:1863:18)
at Client$2.parse (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:2369:60)
at IO.write (/Users/varungupta/Projects/myapp-server/node_modules/@firebase/database-compat/dist/index.standalone.js:186:16)
at TLSSocket.ondata (node:internal/streams/readable:754:22)
at TLSSocket.emit (node:events:527:28)
at TLSSocket.emit (node:domain:475:12)
at addChunk (node:internal/streams/readable:315:12)
at readableAddChunk (node:internal/streams/readable:289:9)
at TLSSocket.Readable.push (node:internal/streams/readable:228:10)
at TLSWrap.onStreamRead (node:internal/stream_base_commons:190:23)
There isn't much information about what exactly caused the maxretry error. The error happens at random after a few days of running the script. It doesn't happen right away.
The second error I see, which isn't as disruptive as the one above, is:
[2022-06-01T10:30:49.722Z] @firebase/database: FIREBASE WARNING: transaction at /queue/tasks/-N3TDQCAdt4y-akb0_MK failed: disconnect
This doesn't stop the node process, and I can see that a transaction failed, but I'm not sure why it disconnected or how I can resolve this problem.
I am using firebase-admin 9.6.0.
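For context, "maxretry" is the abort reason the Realtime Database SDK reports when a transaction gives up after exhausting its retries, and firebase-queue claims tasks with such transactions under /queue/tasks. Below is a minimal sketch of this kind of worker with a last-resort guard so a single failed task claim doesn't silently kill the process. It is only illustrative: findAppUsers, the database URL, and the users/phone data layout are my own assumptions, not details from the original setup.

// worker.js -- illustrative sketch, assuming firebase-admin 9.x and firebase-queue.
const admin = require('firebase-admin');
const Queue = require('firebase-queue');

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: 'https://<your-project>.firebaseio.com', // placeholder
});

// Hypothetical matcher: look up which of the submitted contacts are registered users.
// (Stubbed here; the real implementation and data layout were not shown in the post.)
async function findAppUsers(contacts) {
  const users = admin.database().ref('users');
  const matches = [];
  for (const phone of contacts || []) {
    const snap = await users.orderByChild('phone').equalTo(phone).once('value');
    if (snap.exists()) matches.push(phone);
  }
  return matches;
}

// firebase-queue claims tasks under <ref>/tasks, which matches /queue/tasks in the warning above.
const queue = new Queue(admin.database().ref('queue'), (data, progress, resolve, reject) => {
  findAppUsers(data.contacts)
    .then((matches) => resolve({ matches }))
    .catch((err) => reject(err.message));
});

// The maxretry error in the stack trace above is thrown asynchronously from inside the
// database SDK, so it can't be caught around the worker callback. A process-level handler
// at least logs it and exits cleanly so a supervisor (pm2, systemd, ...) can restart the
// worker; it is a guard against silent crashes, not a fix for the underlying disconnects.
process.on('uncaughtException', (err) => {
  console.error('Uncaught error in worker:', err);
  process.exit(1);
});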
My Artifactory logs are showing the following errors with alarming frequency. The metadata service is up and healthy according to Artifactory, and aside from the log spam, it doesn't seem to be causing any problems. Does anyone have any ideas how to fix this?
[jfrt ] [ERROR] [af10ed1c492f4e88] [s.MetadataEventServiceImpl:346] [art-exec-6 ] - Unable to send statistics event to Metadata Server. Caught exception: Failed executing api/v1/stats, with response code: HTTP/1.1 500 Internal Server Error and response message: {"cause":"Internal error while processing request","message":"Failed to update stats with error couldn't find versionIDs for the given paths: couldn't find versionIDs for the given paths"}
Artifactory 7.27.10, running in Kubernetes
Using an external postgres 13 database
Using s3 as the storage backend
This is a known issue (tracked internally as META-1180). It has been fixed, and the fix will be released with Artifactory 7.29, which is scheduled for release sometime over the next few weeks.
I am trying to generate the basic nodes (PartyA, PartyB and Notary) on Ubuntu 14 by running ./gradlew deployNodes or even ./gradlew clean deployNodes. The error reads:
... still waiting. If this is taking longer than usual, check the node logs.
Error while generating node info file /cordapp-template-java/build/nodes/Notary/logs
Error while generating node info file /cordapp-template-java/build/nodes/PartyB/logs
Error while generating node info file /cordapp-template-java/build/nodes/PartyA/logs
Task :deployNodes FAILED
FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':deployNodes'.
Error while generating node info file. Please check the logs in /cordapp-template-java/build/nodes/Notary/logs.
Error while generating node info file. Please check the logs in /cordapp-template-java/build/nodes/Notary/logs.
The error logs do not provide any indication of what went wrong.
I have personally run into this issue myself. From what I saw, it seemed to be a random incident on the Unix-based machine.
The issue was resolved after I moved the project to a different location. It's absurd, but I have never run into this issue again.
We have been having problems with our Artifactory server since this morning. When I try to restart Artifactory, we get this error:
2018-04-16 10:11:11,360 [art-init] [WARN ] (o.j.a.c.AccessClientHttpException:27) - Couldn't parse ErrorsModel from Access. Original message: Not Found
2018-04-16 10:11:37,420 [art-init] [ERROR] (o.a.w.s.ArtifactoryContextConfigListener:99) - Application could not be initialized: Waiting for access server to respond timed-out after 90303 milliseconds. java.lang.reflect.InvocationTargetException: null
Can anyone help? We have no idea what's wrong.
Thanks
It seems there is an issue with the Access application, which is being started simultaneously with Artifactory.
You should find the relevant logs in the following log file: $ART_HOME/access/logs/access.log
The error message is somewhat misleading. In fact, there was a problem at the database level: the transaction log became full on our MSSQL database. We have increased the limit and are working on how to reduce the size of the log.