Solr Cloud 4.3 all instances return 404 result - solrcloud

I have a SolrCloud setup with one external ZooKeeper; it has 2 collections, and each collection is divided into 2 shards as follows:
solr#1 --> collection 1 - shard 1, collection 2 - shard 1
solr#2 --> collection 1 - shard 2, collection 2 - shard 2
Everything was running fine in Solr 4.2.1 until I upgraded (set up from scratch) to 4.3.1. All the settings were kept the same, but now any query to the cloud returns a 404 error. However, all the shards do appear in the Admin UI's Cloud section. Any reason why? Did anything change in 4.3?
`host:port/solr/collection1/select?q=*&wt=xml&indent=true`
Result:
<response>
<lst name="error">
<str name="msg">Server at host:port/collection1 returned non ok status:404, message:Not Found</str>
<int name="code">404</int>
</lst>
</response>

For other people who encounter this:
I had the same problem. It turned out that the default value of hostContext in solr.xml may no longer be empty as it was before; it should now be "solr". In solr.xml, change
hostContext="${hostContext:}"
to
hostContext="${hostContext:solr}"
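For context, that attribute lives on the <cores> element in the legacy (4.x) solr.xml format. A minimal sketch, with illustrative values for the other attributes (not taken from the question):

<solr persistent="true">
  <cores adminPath="/admin/cores"
         host="${host:}"
         hostPort="${jetty.port:8983}"
         hostContext="${hostContext:solr}"
         zkClientTimeout="${zkClientTimeout:15000}">
    <core name="collection1" instanceDir="collection1"/>
  </cores>
</solr>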

Related

Setting offer throughput or autopilot on container is not supported for serverless accounts. Azure Cosmos DB migration tool

After selecting all the details correctly, the Migration tool returns an error related to the throughput value, and setting it to 0 or -1 does not help.
A workaround to migrate data using the tool is to first create the collection in the Azure portal (Cosmos DB) and then run the Migration tool with the same details; it will then add all the rows to the collection you created. The main issue here was the creation of a new collection, but I do not know why it returns something related to throughput, which I think has nothing to do with it.
In general, setting an explicit value for offer throughput is not allowed for serverless accounts. Either omit that value so the default is applied, or change your account type (see the sketch after the links below).
Related issues (still open as of 23/02/2022):
https://learn.microsoft.com/en-us/answers/questions/94814/cosmos-quick-start-gt-create-items-contaner-gt-htt.html
https://github.com/Azure/azure-cosmos-dotnet-v2/issues/861
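For illustration, the same constraint applies when creating containers programmatically. A minimal sketch using the azure-cosmos Python SDK; the endpoint, key, database, and container names are placeholders:

from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key for illustration
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<account-key>")
database = client.create_database_if_not_exists("mydb")

# On a serverless account, omit any throughput argument entirely;
# passing offer_throughput here would be rejected, much like the
# error the Migration tool reports.
container = database.create_container_if_not_exists(
    id="mycollection",
    partition_key=PartitionKey(path="/id"),
)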

Solr Fields, set 'stored=false' still appearing in query response

Fields in my Solr schema (Solr v6.6) marked as 'stored=false' are still appearing in the query response.
Did you reindex the documents after changing the schema? Also, be aware that for SolrCloud you need to do an upconfig: upload your schema config to ZooKeeper, reload/restart Solr, and then try again; the steps are sketched below.
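A sketch of the upload-and-reload steps with the Solr 6.x CLI (the ZooKeeper address, config name, paths, and collection name are placeholders):

bin/solr zk upconfig -z localhost:2181 -n myconfig -d /path/to/configset/conf
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"

Note that stored=false is applied at index time, so documents indexed before the schema change will keep returning their stored values until they are reindexed.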

DynamoDB limitations when deploying MoonMail

I'm trying to deploy MoonMail on AWS. However, I receive this exception from CloudFormation:
Subscriber limit exceeded: Only 10 tables can be created, updated, or deleted simultaneously
Is there another way to deploy without opening a support case and asking them to raise my limit?
This is an AWS limit for APIs: (link)
API-Specific Limits
CreateTable/UpdateTable/DeleteTable
In general, you can have up to 10 CreateTable, UpdateTable, and DeleteTable requests running simultaneously (in any combination). In other words, the total number of tables in the CREATING, UPDATING or DELETING state cannot exceed 10.
The only exception is when you are creating a table with one or more secondary indexes. You can have up to 5 such requests running at a time; however, if the table or index specifications are complex, DynamoDB might temporarily reduce the number of concurrent requests below 5.
You could open a support request with AWS to raise this limit for your account, but I don't feel that is necessary. It seems you could create the DynamoDB tables beforehand, using the AWS CLI or an AWS SDK, and use MoonMail with read-only access to those tables. Using the SDK (example), you could create those tables sequentially, without hitting this simultaneous-creation limit, as sketched below.
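A sketch of that sequential creation with boto3; the table names and key schema here are hypothetical, and MoonMail's real definitions live in s-resources-cf.json:

import boto3

client = boto3.client("dynamodb")

# Hypothetical table names; take the real definitions from s-resources-cf.json
tables = ["MoonMail-campaigns", "MoonMail-recipients", "MoonMail-lists"]

for name in tables:
    client.create_table(
        TableName=name,
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
    )
    # Wait until the table is ACTIVE before creating the next one,
    # so no more than one table is ever in the CREATING state.
    client.get_waiter("table_exists").wait(TableName=name)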
Another option is to edit the s-resources-cf.json file to include only 10 tables and deploy. After that, add the missing tables and deploy again.
Whichever solution you apply, consider creating an issue in MoonMail's repo, because as it stands now it does not work on the first try (there are 12 tables in the resources file).

Using the Nexus3 API how do I get a list of artifacts in a repository

We are migrating from Nexus Repository Manager 2.1.4 to Nexus 3.1.0-04. With version 2 we have been able to use the API to get a list of artifacts by repository; however, we are struggling to find a way to do this with the Nexus 3 API.
Having read https://books.sonatype.com/nexus-book/reference3/scripting.html (chapter 16), we have been able to get artifact information for a specific blob using a Groovy script like:
import org.sonatype.nexus.blobstore.api.BlobId

// Fetch a single blob by id from the blob store named "default"
// and return its headers and metrics
def properties = blobStore.blobStoreManager.get("default").get(new BlobId("7f6379d32f8dd78f98b5b181166703b6")).getProperties()
return [headers: properties.headers, metrics: properties.metrics]
However, we can't find a way to iterate over the contents of a blob store. We can get a blob store object:
blobStore.blobStoreManager.get("default")
however, the API does not appear to give us a way to get a list of all blobs within that store. We need a list of the BlobIds within a blob store.
Is there a way to do this via the Nexus 3 API?
One of our internal team members put this together. It doesn't use the blobStore, but I believe it accomplishes what you are trying to do (and a bit more): https://gist.github.com/kellyrob99/2d1483828c5de0e41732327ded3ab224
For some background: think of a blobStore as just where the bits are stored, with no information about them. OrientDB has the Component/Asset records and stores all the information about them, so you'll generally want to use that instead of the blobStore for asset information.
Once your migration is done, it may be worth updating your version of Nexus.
That way, you will be able to use the new (still in beta) REST API for Nexus. It is available by default from version 3.3.0 onwards: http://localhost:8082/swagger-ui/
Basically, you retrieve the JSON output from this URL: http://localhost:8082/service/siesta/rest/beta/assets?repositoryId=YOURREPO
Only 10 records are returned at a time, and you have to use the continuationToken provided to request the next 10 records for your repository by calling: http://localhost:8082/service/siesta/rest/beta/assets?continuationToken=46525652a978be9a87aa345bdb627d12&repositoryId=YOURREPO
More information here: http://blog.sonatype.com/nexus-repository-new-beta-rest-api-for-content
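A sketch of walking that pagination from Python with the requests library, assuming the response body carries items and continuationToken fields as described above (host, port, and repository name are placeholders):

import requests

BASE = "http://localhost:8082/service/siesta/rest/beta/assets"
params = {"repositoryId": "YOURREPO"}
assets = []

while True:
    page = requests.get(BASE, params=params).json()
    assets.extend(page.get("items", []))
    token = page.get("continuationToken")
    if not token:
        break  # last page reached
    params["continuationToken"] = token

print(len(assets), "assets in repository")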

Redis only using 1 key in database

I have set up Redis as a caching mechanism on my server for a WordPress site. Basically, on each request I check whether a cached copy of the page exists and, if so, I serve it.
I'm using Predis (https://github.com/nrk/predis) as an interface to the redis database.
When I look at the Redis usage info, however, I only see 1 key used in the system:
used_memory:103810376
used_memory_human:99.00M
used_memory_rss:106680320
used_memory_peak:222011768
used_memory_peak_human:211.73M
mem_fragmentation_ratio:1.03
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:8
bgsave_in_progress:0
last_save_time:1396168319
bgrewriteaof_in_progress:0
total_connections_received:726918
total_commands_processed:1240245
expired_keys:22
evicted_keys:0
keyspace_hits:1158841
keyspace_misses:699
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:21712
vm_enabled:0
role:master
db0:keys=1,expires=0
How could this be? I would expect to see more keys listed, as each cached copy of a page's HTML should have its own key.
What am I missing here?
Without looking at the technical implementation, it could be several things:
1) The pages didn't get a hit, so they are not in the cache.
2) The keys have already expired.
3) The mechanism uses, for example, a hash (HSET), where you can have N key/value pairs registered under 1 main key. You can check this by running the TYPE command on the single key you've got, as sketched below.
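A quick way to check possibility 3, sketched with the redis-py client (connection details are placeholders; Predis exposes the same TYPE and HLEN commands):

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Inspect the single key in db0 and its type
key = r.keys("*")[0]
print(key, r.type(key))  # e.g. b'hash'

# If it is a hash, count how many fields are packed under that one key
if r.type(key) == b"hash":
    print(r.hlen(key), "fields stored under this single key")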
