I am able to launch a local DynamoDB server from bash through this command:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb &
Is there not a pure-Java way to start the server from one's code? I don't mean a Java callout to the shell through the Process object, but a way such that when I run my app, the server starts, and when my app is killed, the server is killed.
I can live with an embedded database if such a mode exists, though something that reflects server consistency semantics would be ideal.
EDIT: September 23rd 2015
There was an announcement on Aug 3, 2015 that adds the ability to run an embedded DynamoDB Local in the same process. You can add a Maven test dependency and use one of the approaches below to run it.
<!--Dependency:-->
<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>DynamoDBLocal</artifactId>
        <version>[1.11,2.0)</version>
    </dependency>
</dependencies>
<!--Custom repository:-->
<repositories>
    <repository>
        <id>dynamodb-local-oregon</id>
        <name>DynamoDB Local Release Repository</name>
        <url>https://s3-us-west-2.amazonaws.com/dynamodb-local/release</url>
    </repository>
</repositories>
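If your build uses Gradle instead of Maven, the equivalent configuration should look roughly like this, a hedged sketch using the same coordinates and repository URL as above (testCompile was the test-scope configuration of that era):

repositories {
    // Same custom S3-hosted repository as the Maven config above
    maven { url "https://s3-us-west-2.amazonaws.com/dynamodb-local/release" }
}

dependencies {
    testCompile "com.amazonaws:DynamoDBLocal:[1.11,2.0)"
}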
And here is an example taken from the awslabs/aws-dynamodb-examples GitHub repository:
AmazonDynamoDB dynamodb = null;
try {
    // Create an in-memory and in-process instance of DynamoDB Local that skips HTTP
    dynamodb = DynamoDBEmbedded.create().amazonDynamoDB();
    // use the DynamoDB API with DynamoDBEmbedded
    listTables(dynamodb.listTables(), "DynamoDB Embedded");
} finally {
    // Shutdown the thread pools in DynamoDB Local / Embedded
    if (dynamodb != null) {
        dynamodb.shutdown();
    }
}

// Create an in-memory and in-process instance of DynamoDB Local that runs over HTTP
final String[] localArgs = { "-inMemory" };
DynamoDBProxyServer server = null;
try {
    server = ServerRunner.createServerFromCommandLineArgs(localArgs);
    server.start();
    dynamodb = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            // we can use any region here
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
            .build();
    // use the DynamoDB API over HTTP
    listTables(dynamodb.listTables(), "DynamoDB Local over HTTP");
} finally {
    // Stop the DynamoDB Local endpoint
    if (server != null) {
        server.stop();
    }
}
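If you consume the embedded flavor from unit tests, a convenient pattern is to create a fresh instance per test and shut it down in teardown. A minimal sketch, assuming JUnit 4 and the test dependency above (the class and test names are illustrative):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.local.embedded.DynamoDBEmbedded;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertNotNull;

public class EmbeddedDynamoDbTest {

    private AmazonDynamoDB dynamodb;

    @Before
    public void setUp() {
        // In-memory, in-process instance; no port, no separate process.
        dynamodb = DynamoDBEmbedded.create().amazonDynamoDB();
    }

    @After
    public void tearDown() {
        // Shut down the embedded instance's thread pools.
        if (dynamodb != null) {
            dynamodb.shutdown();
        }
    }

    @Test
    public void canListTables() {
        assertNotNull(dynamodb.listTables());
    }
}

This gives you exactly the lifecycle asked about in the question: the "server" lives and dies with the JVM that runs your tests.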
Old answer
Like you said, there is currently no built-in way in DynamoDBLocal or the SDK to do this. It would be nice if there were an embedded DynamoDBLocal that you could start up in the same process.
Here is a simple workaround/solution using java.lang.Process to start it up and shut it down programmatically in case others are interested.
Documentation for DynamoDBLocal can be found here, and here are the current definitions of the arguments:
-inMemory — Run in memory, no file dump
-port 4000 — Communicate using port 4000.
-sharedDb — Use a single database file, instead of separate files for each credential and region
Note that this is using the most recent version of DynamoDBLocal as of August 5th, 2015.
final ProcessBuilder processBuilder = new ProcessBuilder("java",
        "-Djava.library.path=./DynamoDBLocal_lib",
        "-jar",
        "DynamoDBLocal.jar",
        "-sharedDb",
        "-inMemory",
        "-port",
        "4000")
        .inheritIO()
        .directory(new File("/path/to/dynamo/db/local"));
final Process process = processBuilder.start();
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        System.out.println("Shutdown DynamoDBLocal");
        process.destroy();
        try {
            process.waitFor(3, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            System.out.println("Process did not terminate after 3 seconds.");
        }
        System.out.println("DynamoDBLocal isAlive=" + process.isAlive());
    }
});
// Do some stuff
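One caveat with the Process approach: processBuilder.start() returns as soon as the child JVM is forked, which can be before DynamoDB Local is actually listening. A small hypothetical helper (not part of DynamoDBLocal) that blocks until the port from the arguments above accepts TCP connections:

import java.io.IOException;
import java.net.Socket;

// Hypothetical readiness check: polls until the port accepts a TCP connection
// or the timeout elapses.
final class PortWaiter {
    static void waitForPort(int port, long timeoutMillis) throws InterruptedException {
        final long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket ignored = new Socket("localhost", port)) {
                return; // server is up and accepting connections
            } catch (IOException e) {
                Thread.sleep(100); // not listening yet, retry shortly
            }
        }
        throw new IllegalStateException("Port " + port + " did not open in time");
    }
}

For example, call PortWaiter.waitForPort(4000, 10000) right after processBuilder.start() and before creating your DynamoDB client.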
Write a Gradle task to extract the DynamoDB Local zip, then use the https://github.com/marcoVermeulen/gradle-spawn-plugin Gradle plugin to launch DynamoDB Local. It is very easy to use, and there is no need for any ProcessBuilder magic.
Sample code:
// to start dynamodb-local
task launch(type: SpawnProcessTask) {
    println("Launching....")
    command "java -Djava.library.path=/location/to/dynamodb-local/DynamoDBLocal_lib -jar /location/to/dynamodb-local/DynamoDBLocal.jar -inMemory -delayTransientStatuses"
    ready "Initializing DynamoDB Local"
}

// to stop dynamodb-local process
task stop(type: KillProcessTask)
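To tie these tasks into the test lifecycle, so that DynamoDB Local is spawned before the tests run and killed afterwards, wiring along these lines should work (a hedged sketch, assuming a standard Java project and the task names defined above):

// run "launch" before tests, and always run "stop" after them
test.dependsOn launch
test.finalizedBy stop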
Related
Corda open source on Linux. Node RPC SSL enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.
You must follow the steps in this section of the docs: https://docs.corda.net/clientrpc.html#wire-security, which I have detailed for you below.
When you enable RPC SSL, you must run this command one time (you will be asked to supply 2 new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl=true
    ssl {
        keyStorePath=${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword=password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now if you have a webserver, inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;

@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(Paths.get(trustStorePath),
            trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
The above adds two extra attributes that you must now supply when starting the webserver; to do that, modify your clients module's build.gradle:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
            '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
            '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
            '--config.rpc.ssl.truststorepassword=password'
}
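You can then start the webserver with the task defined above:

./gradlew runNodeServer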
If you're planning to connect to the node with a standalone shell, you must do something similar, but it didn't work for me; I reported the following bug: https://github.com/corda/corda/issues/5955
I want to connect to and query a SQLite database from Mule Anypoint Studio, but I get the following error. Please help me. Thanks all.
No suitable driver found for jdbc:sqlite
Here is my code:
#Processor (name="select" ,friendlyName ="select")
public void select() {
ArrayList<Story> list = new ArrayList<Story>();
String sql = "select * from chat";
try (Connection conn = this.connect();
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(sql)){
// loop through the result set
while (rs.next()) {
Story s = new Story();
s.setStory(rs.getInt("id"), rs.getString("user_chat"),rs.getString("bot_chat"));
list.add(s);
}
} catch (SQLException | ClassNotFoundException e) {
System.out.println(e.getMessage());
}
for (int i =0 ; i < list.size(); i++){
System.out.print(list.get(i).GetID() +"| "+ list.get(i).GetUserChat() + "| "+ list.get(i).GetBotChat() +"\n" );
}
}
private Connection connect() throws ClassNotFoundException {
// SQLite connection string
Class.forName("org.sqlite.JDBC");
String url = "jdbc:sqlite:C:\\data.db";
Connection conn = null;
try {
conn = DriverManager.getConnection(url);
} catch (SQLException e) {
System.out.println(e.getMessage());
}
return conn;
}
}
Make sure you have a valid jar/driver in your project classpath.
Open new Mule Project in Studio, and then follow these steps to add/create a datasource in mule flow:
a. Import the driver
b. Create a Datasource,
c. Create a Connector that uses our Datasource, and finally
d. Create a simple flow that uses our connector.
It seems you are missing the driver jar in your project classpath.
How to Import the Driver?
Once you have the jar file (you can download the SQLite JDBC jar from a repository, e.g. the Maven repo), the next steps are very simple:
In the Package Explorer,
Right-click over the Project folder
Look in the menu for Build Path > Add External Archives…
Look for the jar file in your hard drive and click Open.
Now you should see in the package explorer that the jar file is present in “Referenced Libraries.”
This will allow you to create an instance of the Object driver you will need.
It's probably a classpath issue. If you are using Maven with your project, simply add the dependency in your pom.xml (and right click on your project > Mule > Update project dependencies):
<dependency>
    <groupId>org.xerial</groupId>
    <artifactId>sqlite-jdbc</artifactId>
    <version>3.20.1</version>
    <scope>test</scope>
</dependency>
Make sure you understand how Maven works and how to manipulate your pom.xml file. Maven getting started and POM Introduction might help.
If you are not using Maven, you need to manually import the dependency in your classpath. @Malesh_Loya's answer should help.
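Either way, a quick standalone check that the driver is actually visible on the classpath can save time before wiring it into a flow. A minimal sketch (the class name is hypothetical; the connection string is the one from the question):

import java.sql.Connection;
import java.sql.DriverManager;

public class SqliteDriverCheck {
    public static void main(String[] args) throws Exception {
        // Throws ClassNotFoundException if sqlite-jdbc is missing from the classpath.
        Class.forName("org.sqlite.JDBC");
        // Connection string taken from the question; adjust the path as needed.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:C:\\data.db")) {
            System.out.println("Driver OK: " + conn.getMetaData().getDriverName());
        }
    }
}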
I just created a new deployment slot for my app, imported the publishing profile to Visual Studio, but after deployment I get this error message:
Error 8: An error occurred while creating the WebJob schedule: No website could be found which matches the WebSiteName [myapp__staging] and WebSiteUrl [http://myapp-staging.azurewebsites.net] supplied.
I have 2 webjobs, a continuous and a scheduled webjob.
I already signed in to the correct Azure account, as stated by this answer.
Will I need to set something else up in order to deploy my app to a staging Deployment Slot with webjobs?
My app is using ASP.NET, in case it makes a difference.
There are a few quirks when using the Azure Scheduler. The recommendation is to use the new CRON support instead. You can learn more about it here and here.
Jeff,
As David suggested, you can/should migrate to the new CRON support. Here's an example. The WebJob will be deployed as a continuous WebJob.
Keep in mind that in order to use this you need to install the WebJobs package and extensions, which are currently prerelease. You can get them on NuGet.
Install-Package Microsoft.Azure.WebJobs -Pre
Install-Package Microsoft.Azure.WebJobs.Extensions -Pre
Also, as David suggested, if you're not using the WebJobs SDK, you can run this using a settings.job file. He provided an example here.
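For reference, a settings.job file is just a small JSON file deployed alongside the WebJob binaries. A minimal sketch; the schedule field takes a six-field CRON expression (with seconds), and this one fires every 5 minutes:

{
  "schedule": "0 */5 * * * *"
}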
Program.cs
static void Main()
{
    // Set up DI (in case you're using an IoC container)
    var module = new CustomModule();
    var kernel = new StandardKernel(module);

    // Configure JobHost
    var storageConnectionString = "your_connection_string";
    var config = new JobHostConfiguration(storageConnectionString) { JobActivator = new JobActivator(kernel) };
    config.UseTimers(); // Required for the TimerTrigger (CRON / TimeSpan) support.

    // Pass configuration to JobHost
    var host = new JobHost(config);

    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
Functions.cs
public class Functions
{
    public void YourMethodName([TimerTrigger("00:05:00")] TimerInfo timerInfo, TextWriter log)
    {
        // This job runs every 5 minutes.
        // Do work here.
    }
}
You can change the schedule in the TimerTrigger attribute.
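The attribute also accepts a six-field CRON expression (with a seconds field) instead of a TimeSpan. A hedged sketch of the same five-minute schedule expressed as CRON; the method name is illustrative:

public class Functions
{
    // "0 */5 * * * *" = at second 0 of every 5th minute.
    public void YourCronMethodName([TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo, TextWriter log)
    {
        // Do work here.
    }
}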
UPDATE Added the webjob-publish-settings.json file
Here's an example of the webjob-publish-settings.json:
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "YourWebJobName",
  "startTime": null,
  "endTime": null,
  "jobRecurrenceFrequency": null,
  "interval": null,
  "runMode": "Continuous"
}
I am currently investigating the possibility of using a Java Web Service (as described by the Info*Engine documentation of Windchill) in order to retrieve information regarding parts. I am using Windchill version 10.1.
I have successfully deployed a web service, which I consume in a .Net application. Calls which do not try to access Windchill information complete successfully. However, when trying to retrieve part information, I get a wt.method.AuthenticationException.
Here is the code that runs within the web service (the web service method simply calls this method):
public static String GetOnePart(String partNumber) throws WTException
{
    String partName = null;
    WTPart part = null;
    RemoteMethodServer server = RemoteMethodServer.getDefault();
    server.setUserName("theUsername");
    server.setPassword("thePassword");
    try {
        QuerySpec qspec = new QuerySpec(WTPart.class);
        qspec.appendWhere(new SearchCondition(WTPart.class, WTPart.NUMBER, SearchCondition.LIKE, partNumber), new int[] { 0, 1 });
        // This fails.
        QueryResult qr = PersistenceHelper.manager.find((StatementSpec) qspec);
        while (qr.hasMoreElements()) {
            part = (WTPart) qr.nextElement();
            partName = part.getName();
        }
    } catch (AuthenticationException e) {
        // Exception caught here.
        partName = e.toString();
    }
    return partName;
}
This code works in a command line application deployed on the server, but fails with a wt.method.AuthenticationException when performed from within the web service. I feel it fails because the use of RemoteMethodServer is not what I should be doing since the web service is within the MethodServer.
Anyhow, if anyone knows how to do this, it would be awesome.
A bonus question would be how to log from within the web service, and how to configure this logging.
Thank you.
You don't need to authenticate on the server side with this code:
RemoteMethodServer server = RemoteMethodServer.getDefault();
server.setUserName("theUsername");
server.setPassword("thePassword");
If you have followed the documentation (Windchill Help Center), your web service should be annotated with @WebService and @WebMethod(operationName = "getOnePart") and extend com.ptc.jws.servlet.JaxWsService.
Also, you have to pay attention to the security policy used during deployment.
The default ant script is configured with
security.policy=userNameAuthSymmetricKeys
So you need to handle it when you consume your web service from .NET.
For logging events, you just need to call the log4j logger instantiated by default, e.g. $log.debug("Hello").
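If you prefer an explicit logger over the generated $log field, a plain log4j logger should behave the same way. A minimal sketch, assuming log4j is on the Windchill classpath; the class is hypothetical and only illustrates the pattern:

import org.apache.log4j.Logger;

// Hypothetical helper class, purely to illustrate the logging pattern.
public class PartServiceLogging {
    private static final Logger LOG = Logger.getLogger(PartServiceLogging.class);

    public static void logCall(String partNumber) {
        LOG.debug("GetOnePart called with partNumber=" + partNumber);
    }
}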
You can't pre-authenticate on the server side.
You can write the auth into your client, though. Not sure what the .NET equivalent is, but this works for Java clients:
private static final String USERNAME = "admin";
private static final String PASSWORD = "password";

static {
    java.net.Authenticator.setDefault(new java.net.Authenticator() {
        @Override
        protected java.net.PasswordAuthentication getPasswordAuthentication() {
            return new java.net.PasswordAuthentication(USERNAME, PASSWORD.toCharArray());
        }
    });
}
Is it possible to output the db migration to an SQL file instead of directly invoking database changes in flyway?
Most of the time this is not needed, since with Flyway the DB migrations themselves are already written in SQL.
Yes, it's possible, and as far as I am concerned the feature is an absolute must for DBAs who don't want to allow Flyway in prod.
I made do with modifying code from here; it's a dry-run command for Flyway, and you can add a FileWriter and write out migrationDetails:
https://github.com/killbill/killbill/commit/996a3d5fd096525689dced825eac7a95a8a7817e
I did it like so... Project structure (just copied it out of killbill's project and renamed the package to flywaydr):
.
./main
./main/java
./main/java/com
./main/java/com/flywaydr
./main/java/com/flywaydr/CapturingMetaDataTable.java
./main/java/com/flywaydr/CapturingSqlMigrationExecutor.java
./main/java/com/flywaydr/DbMigrateWithDryRun.java
./main/java/com/flywaydr/MigrationInfoCallback.java
./main/java/com/flywaydr/Migrator.java
./main/java/org
./main/java/org/flywaydb
./main/java/org/flywaydb/core
./main/java/org/flywaydb/core/FlywayWithDryRun.java
In Migrator.java, add (implement the callback and put it in DbMigrateWithDryRun.java):
} else if ("dryRunMigrate".equals(operation)) {
MigrationInfoCallback mcb = new MigrationInfoCallback();
flyway.dryRunMigrate();
MigrationInfoImpl[] migrationDetails = mcb.getPendingMigrationDetails();
if(migrationDetails.length>0){
writeMasterScriptToFile(migrationDetails);
}
}
Then, to write the output to a file, something like:
private static void writeMasterScriptToFile(MigrationInfoImpl[] migrationDetails) {
    FileWriter fw = null;
    try {
        String masterScriptLoc = "path/to/file";
        String scriptsPathLoc = "path/to/scripts/dir"; // target directory for copied scripts
        fw = new FileWriter(masterScriptLoc);
        LOG.info("Writing output to " + masterScriptLoc);
        for (final MigrationInfoImpl migration : migrationDetails) {
            Path file = Paths.get(migration.getResolvedMigration().getPhysicalLocation());
            // if you want to copy the actual script files parsed by flyway
            Files.copy(file, Paths.get(scriptsPathLoc + File.separator + file.getFileName().toString()), REPLACE_EXISTING);
            // or just append each pending script's SQL to the master script
            fw.write(new String(Files.readAllBytes(file)));
        }
    } catch (Exception e) {
        LOG.error("Could not write to file, io exception was thrown.", e);
    } finally {
        try { fw.close(); } catch (Exception e) { LOG.error("Could not close file writer.", e); }
    }
}
One last thing to mention: I compile and package this into a jar "with dependencies" (aka a fat jar) via Maven (the Maven Assembly Plugin with the jar-with-dependencies descriptor) and run it via a command like the one below, or you can include it as a dependency and call it via the mvn exec:exec goal, which is something I had success with as well.
$ java -jar /path/to/flywaydr-fatjar.jar dryRunMigrate -regular.flyway.configs -etc -etc
I didn't find a way, so I switched to MyBatis Migrations. Looks quite nice.