Solr core creation in SolrCloud (Solr 4.1.0)

I am trying to create a core dynamically through my Java application in a SolrCloud setup with two shards.
CloudSolrServer cloudSolrServer = new CloudSolrServer("localhost:9983", new LBHttpSolrServer ("http://localhost:8983/solr"));
CoreAdminRequest.Create req = new CoreAdminRequest.Create() {
private static final long serialVersionUID = -8825247378713661625L;
@Override public SolrParams getParams() {
ModifiableSolrParams modifiableSolrParams = (ModifiableSolrParams) super.getParams();
modifiableSolrParams.set("collection.configName", "mycore");
return modifiableSolrParams;
}
};
req.setInstanceDir("/solr/master/mycorepath");
req.setCollection("mycore");
CoreAdminResponse res = req.process(cloudSolrServer.getLbServer());
However, I am getting the error:
Specified config does not exist in ZooKeeper:mycore
When I checked the Solr admin console I found that the collection "mycore" is not completely created (i.e. it does not have the folder symbol) and there is no config with the name "mycore".
How do I go about this problem? What is the standard way to create a core dynamically in a two-shard SolrCloud (Solr 4.1.0)?

I have successfully created a collection with Solr 4.4, having two shards and no replicas, using the Collections API:
CloudSolrServer cs = new CloudSolrServer("localhost:9983");
CollectionAdminResponse res=CollectionAdminRequest.createCollection("testCollection", 2, 1, 1, "127.0.1.1:7574_solr,127.0.1.1:8983_solr", "myconf", null, cs);
This will create a collection with the compositeId router.
"myconf" is the name of the configuration that was uploaded to ZooKeeper during cluster startup.

I too am using Solr 4.1.0, and was having some trouble creating a core. I'm only using one shard, but I answered another question along similar lines here.

Related

DynamoDb streams, just get new updates since

I'm trying to work with DynamoDB Streams, using the example code shown in this article. I've modified it to work in a basic Spring Boot app (from Initializr), against an existing DynamoDB table which has streams enabled. Everything appears to work; however, I'm not seeing any new updates.
This particular table has a bulk update once per day at a specific time, and it may get some minor changes now and then during the day. I'm trying to monitor these minor updates. When I run the application I can see the records from the bulk update, but if my application is running and I use the AWS Console to modify, create, or delete a record, I don't seem to get any output.
I'm using:
Spring Boot:2.3.9.RELEASE
amazon-kinesis-client:1.14.2
Java 11
Running on Mac Catalina (though that shouldn't matter)
In my test application I did the following:
package com.test.dynamodb_streams_test_kcl.service;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreams;
import com.amazonaws.services.dynamodbv2.model.*;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.List;
@Slf4j
@Service
@RequiredArgsConstructor
public class LowLevelKclProcessor {
private static final String dynamoDbTableName = "global-items";
private final AmazonDynamoDB dynamoDB;
private final AmazonDynamoDBStreams dynamoDBStreams;
private final ZonedDateTime startTime = ZonedDateTime.now();
@PostConstruct
public void initialize() {
log.info("Describing table={}", dynamoDbTableName);
DescribeTableResult itemTableDescription = dynamoDB.describeTable(dynamoDbTableName);
log.info("Got description");
String itemTableStreamArn = itemTableDescription.getTable().getLatestStreamArn();
log.info("Got stream arn ({}) for table={} tableArn={}", itemTableStreamArn,
itemTableDescription.getTable().getTableName(), itemTableDescription.getTable().getTableArn());
// Get all the shard IDs from the stream. Note that DescribeStream returns
// the shard IDs one page at a time.
String lastEvaluatedShardId = null;
do {
DescribeStreamResult describeStreamResult = dynamoDBStreams.describeStream(
new DescribeStreamRequest()
.withStreamArn(itemTableStreamArn)
.withExclusiveStartShardId(lastEvaluatedShardId));
List<Shard> shards = describeStreamResult.getStreamDescription().getShards();
// Process each shard on this page
for (Shard shard : shards) {
String shardId = shard.getShardId();
System.out.println("Shard: " + shard);
// Get an iterator for the current shard
GetShardIteratorRequest getShardIteratorRequest = new GetShardIteratorRequest()
.withStreamArn(itemTableStreamArn)
.withShardId(shardId)
.withShardIteratorType(ShardIteratorType.LATEST);
GetShardIteratorResult getShardIteratorResult =
dynamoDBStreams.getShardIterator(getShardIteratorRequest);
String currentShardIter = getShardIteratorResult.getShardIterator();
// Shard iterator is not null until the Shard is sealed (marked as READ_ONLY).
// To prevent running the loop until the Shard is sealed, which will be on average
// 4 hours, we process only the items that were written into DynamoDB and then exit.
int processedRecordCount = 0;
while (currentShardIter != null && processedRecordCount < 100) {
System.out.println(" Shard iterator: " + currentShardIter.substring(380));
// Use the shard iterator to read the stream records
GetRecordsResult getRecordsResult = dynamoDBStreams.getRecords(new GetRecordsRequest()
.withShardIterator(currentShardIter));
List<Record> records = getRecordsResult.getRecords();
for (Record record : records) {
// I set a breakpoint on the line below, but it was never hit after the bulk update info
if (startTime.isBefore(ZonedDateTime.ofInstant(record.getDynamodb()
.getApproximateCreationDateTime().toInstant(), ZoneId.systemDefault()))) {
System.out.println(" " + record.getDynamodb());
}
}
processedRecordCount += records.size();
currentShardIter = getRecordsResult.getNextShardIterator();
}
}
// If LastEvaluatedShardId is set, then there is
// at least one more page of shard IDs to retrieve
lastEvaluatedShardId = describeStreamResult.getStreamDescription().getLastEvaluatedShardId();
} while (lastEvaluatedShardId != null);
}
}
Note that your test is based on the low-level DynamoDB Streams API, not on the Kinesis Client Library, so it is normal to have some tricky technical details to deal with.
Your test application has some similarities with the example given in the docs, but it has issues:
When I run the application I can see the records from the bulk update
ShardIteratorType.LATEST will not return records written before the test started (it begins reading just after the most recent stream record in the shard).
So, I will assume that the iterator type was different (e.g. TRIM_HORIZON) and was changed to LATEST later during your tests.
The main issue is that your application polls the shards sequentially, and it will block on the first shard until it finds 100 new records in that shard (due to the LATEST iterator type).
So, you may not see the new minor changes while the test is running if they belong to a different shard.
Solutions:
1- Poll shards in parallel using threads (see the sketch after this list).
2- Filter returned shards using the sequence number of the last logged record, and try to guess the shard that may contain minor changes.
3- Dangerous & I'm not sure if it works :)
In a test table, and if your data model allows this: close the current stream, and enable a new one, then make sure that all your writes belong to one partition. In the majority of cases, table partitions have a one-to-one relationship with active shards. Theoretically, you have only one active shard to deal with.
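Here is a minimal sketch of solution 1, assuming the same dynamoDBStreams client, itemTableStreamArn, and shards list as in the test application above; the thread-pool size and the one-second sleep are illustrative choices, not part of the original code. Each shard is polled on its own thread, so a quiet shard can no longer block the others:
// Poll every shard in parallel instead of sequentially.
// Needs: import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors;
ExecutorService pool = Executors.newFixedThreadPool(shards.size());
for (Shard shard : shards) {
    pool.submit(() -> {
        GetShardIteratorResult iterResult = dynamoDBStreams.getShardIterator(new GetShardIteratorRequest()
                .withStreamArn(itemTableStreamArn)
                .withShardId(shard.getShardId())
                .withShardIteratorType(ShardIteratorType.LATEST));
        String iterator = iterResult.getShardIterator();
        while (iterator != null) {
            GetRecordsResult result = dynamoDBStreams.getRecords(
                    new GetRecordsRequest().withShardIterator(iterator));
            for (Record record : result.getRecords()) {
                System.out.println(shard.getShardId() + " -> " + record.getDynamodb());
            }
            iterator = result.getNextShardIterator();
            try {
                Thread.sleep(1000); // avoid hammering GetRecords on empty shards
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    });
}
With this layout, a record written to any shard shows up as soon as that shard's GetRecords call returns it, regardless of what is happening in the other shards.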

Assigning content store using Java API

I had posted this on the Alfresco Hub but couldn't get a solution yet.
I was trying to convert JavaScript API code to the Java API; it moves files to a different content store ('storeB'). We have storeB defined in 'content-store-selector-context.xml'. We are using the Enterprise version of Alfresco 5.2.
The JavaScript code is as follows; it works perfectly fine:
for each (var n in node.children) {
if (n.isDocument) {
//Apply script for moving files to DMS Store 01
n.removeAspect("cm:versionable");
n.addAspect("cm:storeSelector");
n.properties['cm:storeName'] = "storeB";
n.save();
}
}
Below is the Java API code, but this code is not moving files to 'storeB'. Is there anything I am missing?
Is there any similar method available in the Java API?
List<ChildAssociationRef> children = nodeService.getChildAssocs(dayFolderRef);
Map<QName, Serializable> aspectsProps = new HashMap<QName, Serializable>(1);
aspectsProps.put(ContentModel.PROP_STORE_NAME, "storeB");
LOG.info("Folder::" + dayFolderRef.getId());
LOG.info("Number of Subfolder to be moved is ::" + children.size());
for (ChildAssociationRef childAssoc : children) {
NodeRef childNodeRef = childAssoc.getChildRef();
if (ContentModel.TYPE_CONTENT.equals(nodeService.getType(childNodeRef))) {
LOG.info("Moving the file to secondary storae "+childNodeRef.getId());
nodeService.removeAspect(childNodeRef, ContentModel.ASPECT_VERSIONABLE);
nodeService.addAspect(childNodeRef, ContentModel.ASPECT_STORE_SELECTOR, aspectsProps);
}
}
I can see a save method in the JavaScript API. Based on the response received on the Alfresco forum, there is no equivalent save method in the Java API; the Java API runs in transactions, so changes will commit eventually. But I can check from the DB using the SQL below:
SELECT count(*)
FROM alf_content_url
WHERE orphan_time IS NOT NULL;
The above SQL returns the same count after executing the code, so no DB updates are happening. Is anything wrong?
Any help appreciated.
Regards
Brijesh
I don't see why that would not work; are you positive you're even entering that method? Try adding the content store selector aspect with no properties map, then add the content store name property separately with the setProperty method.
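A minimal sketch of that suggestion, assuming the same nodeService, childNodeRef, and LOG as in the code above (ASPECT_STORE_SELECTOR and PROP_STORE_NAME are the standard ContentModel constants):
// Add the aspect without any properties, then set the store name on its own.
nodeService.addAspect(childNodeRef, ContentModel.ASPECT_STORE_SELECTOR, null);
nodeService.setProperty(childNodeRef, ContentModel.PROP_STORE_NAME, "storeB");
// Read the property back to confirm the value is set in this transaction.
Serializable storeName = nodeService.getProperty(childNodeRef, ContentModel.PROP_STORE_NAME);
LOG.info("cm:storeName is now " + storeName);
If the logged value is correct but the content still does not move, the method is probably not being reached at all, which is what the question above is trying to rule out.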

WCF Transaction with multiple inserts

When creating a user, entries are required in multiple tables. I am trying to create a transaction that creates a new entry in one table and then passes the new entity ID into the parent table, and so on. The error I am getting is:
The transaction manager has disabled its support for remote/network
transactions. (Exception from HRESULT: 0x8004D024)
I believe this is caused by creating multiple connections within a single TransactionScope, but I am unsure what the best/most efficient way of doing this is.
[OperationBehavior(TransactionScopeRequired = true)]
public int CreateUser(CreateUserData createData)
{
// Create a new family group and get the ID
var familyGroupId = createData.FamilyGroupId ?? CreateFamilyGroup();
// Create the APUser and get the Id
var apUserId = CreateAPUser(createData.UserId, familyGroupId);
// Create the institution user and get the Id
var institutionUserId = CreateInsUser(apUserId, createData.AlternateId, createData.InstitutionId);
// Create the investigator group user and return the Id
return AddUserToGroup(createData.InvestigatorGroupId, institutionUserId);
}
This is an example of one of the function calls; all the other ones follow the same format.
public int CreateFamilyGroup(string familyGroupName)
{
var familyRepo = _FamilyRepo ?? new FamilyGroupRepository();
var familyGroup = new FamilyGroup() {CreationDate = DateTime.Now};
return familyRepo.AddFamilyGroup(familyGroup);
}
And the repository call for this is as follows
public int AddFamilyGroup(FamilyGroup familyGroup)
{
using (var context = new GameDbContext())
{
var newGroup = context.FamilyGroups.Add(familyGroup);
context.SaveChanges();
return newGroup.FamilyGroupId;
}
}
I believe this is caused by creating multiple connections within a single TransactionScope
Yes, that is the problem. It does not really matter how you avoid that, as long as you avoid it. A common thing to do is to have one connection and one EF context per WCF request. You need to find a way to pass that EF context along.
The method AddFamilyGroup illustrates a common anti-pattern with EF: you are using EF as a CRUD facility. It's supposed to be more like a live object graph connected to the database. The entire WCF request should share the same EF context. If you move in that direction, the problem goes away.

Blackberry - Cannot create SQLite database

I am making an app that runs in the background, and starts on device boot.
I have read the docs and have the SQLiteDemo files from RIM, and I am using them to try to create a database on my SD card in the simulator.
Unfortunately, I am getting this error:
DatabasePathException:Invalid path name. Path does not contains a proper root list. See FileSystemRegistry class for details.
Here's my code:
public static Database storeDB;
public static final String DATABASE_NAME = "testDB";
private String DATABASE_LOCATION = "file:///SDCard/Databases/MyDBFolder/";
public static URI dbURI;
dbURI = URI.create(DATABASE_LOCATION+DATABASE_NAME);
storeDB = DatabaseFactory.openOrCreate(dbURI);
I took out a try/catch for URI.create and DatabaseFactory.openOrCreate for the purposes of this post.
So, can anyone tell me why I can't create a database on my simulator?
If I load it up and go into Media, I can create a folder manually. The SD card is pointing to a folder on my hard drive, and if I create a folder in there, it is shown on the simulator too, so I can create folders, just not programmatically.
Also, I have tried this from the developer docs:
// Determine if an SDCard is present
boolean sdCardPresent = false;
String root = null;
Enumeration rootEnum = FileSystemRegistry.listRoots();
while (rootEnum.hasMoreElements())
{
root = (String)rootEnum.nextElement();
System.err.println("root="+root);
if(root.equalsIgnoreCase("sdcard/"))
{
sdCardPresent = true;
}
}
But it only picks up store/ and never sdcard/.
Can anyone help?
Thanks.
FYI,
I think I resolved this.
The problem was that I was trying to write to storage during boot-up, but the storage wasn't ready yet. Once the device/simulator was fully loaded, and a few of my listeners were triggered, the DB was created.
See here:
http://www.blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/832062/How_To_-_Write_safe_initialization_code.html?nodeid=1487426&vernum=0
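A minimal sketch of that fix, assuming the same storeDB and dbURI fields as above: instead of opening the database directly from the auto-start entry point, wait until the sdcard/ root actually appears in FileSystemRegistry.listRoots() before calling openOrCreate (the retry count and one-second sleep are illustrative, and the try/catch around openOrCreate is omitted as in the snippet above):
// Runs inside the run() method of a background thread started from the auto-start entry point.
boolean sdCardReady = false;
for (int attempt = 0; attempt < 60 && !sdCardReady; attempt++) {
    Enumeration roots = FileSystemRegistry.listRoots();
    while (roots.hasMoreElements()) {
        if ("sdcard/".equalsIgnoreCase((String) roots.nextElement())) {
            sdCardReady = true; // SD card filesystem is mounted
            break;
        }
    }
    if (!sdCardReady) {
        try {
            Thread.sleep(1000); // storage not ready yet; retry in a second
        } catch (InterruptedException e) {
            return;
        }
    }
}
if (sdCardReady) {
    storeDB = DatabaseFactory.openOrCreate(dbURI); // same URI as in the snippet above
}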

Upload files directly to Amazon S3 from ASP.NET application

My ASP.NET MVC application will take a lot of bandwidth and storage space. How can I setup an ASP.NET upload page so the file the user uploaded will go straight to Amazon S3 without using my web server's storage and bandwidth?
Update Feb 2016:
The AWS SDK can handle a lot more of this now. Check out how to build the form, and how to build the signature. That should prevent you from needing the bandwidth on your end, assuming you need to do no processing of the content yourself before sending it to S3.
If you need to upload large files and display a progress bar you should consider the flajaxian component.
It uses Flash to upload files directly to Amazon S3, saving your bandwidth.
This is the best and easiest way to upload files to Amazon S3 via ASP.NET. Have a look at the following blog post by me; I think it will help. There I explain everything from adding an S3 bucket and creating the API key to installing the Amazon SDK and writing the code to upload files. The following is sample code for uploading files to Amazon S3 with ASP.NET C#.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
/// <summary>
/// Summary description for AmazonUploader
/// </summary>
public class AmazonUploader
{
public bool sendMyFileToS3(System.IO.Stream localFilePath, string bucketName, string subDirectoryInBucket, string fileNameInS3)
{
// input explained :
// localFilePath = we will use a file stream , instead of path
// bucketName : the name of the bucket in S3 ,the bucket should be already created
// subDirectoryInBucket : if this string is not empty the file will be uploaded to
// a subdirectory with this name
// fileNameInS3 = the file name in the S3
// create an instance of IAmazonS3 class ,in my case i choose RegionEndpoint.EUWest1
// you can change that to APNortheast1 , APSoutheast1 , APSoutheast2 , CNNorth1
// SAEast1 , USEast1 , USGovCloudWest1 , USWest1 , USWest2 . this choice will not
// store your file in a different cloud storage but (i think) it differ in performance
// depending on your location
IAmazonS3 client = new AmazonS3Client("Your Access Key", "Your Secrete Key", Amazon.RegionEndpoint.USWest2);
// create a TransferUtility instance passing it the IAmazonS3 created in the first step
TransferUtility utility = new TransferUtility(client);
// making a TransferUtilityUploadRequest instance
TransferUtilityUploadRequest request = new TransferUtilityUploadRequest();
if (subDirectoryInBucket == "" || subDirectoryInBucket == null)
{
request.BucketName = bucketName; //no subdirectory just bucket name
}
else
{ // subdirectory and bucket name
request.BucketName = bucketName + @"/" + subDirectoryInBucket;
}
request.Key = fileNameInS3 ; //file name up in S3
//request.FilePath = localFilePath; //local file name
request.InputStream = localFilePath;
request.CannedACL = S3CannedACL.PublicReadWrite;
utility.Upload(request); //commencing the transfer
return true; //indicate that the file was sent
}
}
Here you can use the function sendMyFileToS3 to upload a file stream to Amazon S3.
For more details, check my blog at the following link:
Upload File to Amazon S3 via asp.net
I hope the above-mentioned link will help.
Look for a JavaScript library to handle the client-side upload of these files. I stumbled upon a JavaScript and PHP example, and Dojo also seems to offer a client-side S3 file upload.
ThreeSharp is a library to facilitate interactions with Amazon S3 in a .NET environment.
You'll still need to host the logic to upload and send files to S3 in your MVC app, but you won't need to persist them on your server.
Save and GET data in an AWS S3 bucket in ASP.NET MVC:
To save plain text data to an Amazon S3 bucket:
1. First you need a bucket created on AWS.
2. You need your AWS credentials: a) AWS key, b) AWS secret key, c) region.
// code to save data to AWS
// Note: you can get an "access denied" error; to fix this, check your AWS account and grant read and write rights
Namespaces to add (from the NuGet package):
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
try
{
AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
// simple object put
PutObjectRequest request = new PutObjectRequest()
{
ContentBody = "put your plain text here",
ContentType = "text/plain",
BucketName = "put your bucket name here",
Key = "1"
//put a unique key here to uniquely identify your data
// you can pass any data with a unique id, such as a primary key in the DB
};
PutObjectResponse response = client.PutObject(request);
}
catch (Exception ex)
{
//
}
Now go to your AWS account and check the bucket; you will find the data stored under the key "1" in the S3 bucket.
Note: if you get any other issue, please ask me a question here and I will try to resolve it.
To get data from the AWS S3 bucket:
try
{
var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
GetObjectRequest request = new GetObjectRequest()
{
BucketName = bucketName,
Key = "1"// because we pass 1 as unique key while save
//data at the s3 bucket
};
using (GetObjectResponse response = client.GetObject(request))
{
StreamReader reader = new StreamReader(response.ResponseStream);
vccEncryptedData = reader.ReadToEnd();
}
}
catch (AmazonS3Exception)
{
throw;
}
