RocksDB: how to use TTL in Java?

I am testing the RocksDB Java API. I put a key-value entry into the database, wait 20 seconds, and then get it back. Why is the entry not deleted?
This is my code:
import org.rocksdb.Options;
import org.rocksdb.RocksDBException;
import org.rocksdb.TtlDB;

public class Test2 {
    public static void main(String[] args) throws Exception {
        TtlDB.loadLibrary();
        Options options = new Options()
                .setCreateIfMissing(true);
        TtlDB db = TtlDB.open(options, args[0], 20, false);
        if (args.length > 3 && "put".equals(args[1])) {
            db.put(args[2].getBytes(), args[3].getBytes());
        }
        byte[] arr = db.get(args[2].getBytes());
        if (arr != null) {
            System.out.println(new String(arr));
        } else {
            System.out.println(arr);
        }
        System.out.println(db.get(args[2].getBytes()));
        Thread.sleep(21000);
        System.out.println(db.get(args[2].getBytes()));
        db.close();
    }
}

Expired TTL values are removed only during compaction: an entry is dropped when Timestamp + ttl < time_now. Until a compaction processes that key, get() can still return the stale value, which is why your entry is still readable after 20 seconds.
https://github.com/facebook/rocksdb/wiki/Time-to-Live
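A minimal sketch of how to observe the expiry (illustrative, not from the original answer): after the TTL elapses, force a compaction with compactRange() so the expired key is actually dropped.

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.TtlDB;

public class TtlCompactionDemo {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        Options options = new Options().setCreateIfMissing(true);
        TtlDB db = TtlDB.open(options, args[0], 20, false);    // 20-second TTL
        db.put("k".getBytes(), "v".getBytes());
        Thread.sleep(21000);                                   // wait past the TTL
        db.compactRange();                                     // compaction drops expired entries
        System.out.println(db.get("k".getBytes()));            // expected: null after compaction
        db.close();
    }
}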

Related

How to manage WebSocket objects that are no longer needed ASP.Net Core

I am using ASP.NET Core 3.1. If I want to create a WebSockets backend, for example for a chat app, I need to store all the related WebSocket objects for broadcasting events. My question is: what is the best way to manage removing objects that are no longer useful (disconnected or no longer open), keeping in mind that I want other parts of the application to access the WebSocket groups to also broadcast events if needed?

I store the related connections in a ConnectionNode, which is the nearest layer to the WebSocket objects. A class called WebsocketsManager manages these nodes, and a background service clears the unused objects every timeout period. Since I want the group (related connections) to be accessible to the rest of the application (for example other endpoints), and to avoid any concurrent-modification errors, a broadcast that is required during the cleaning process has to wait for the cleaning process to finish. That is why the WebsocketsManager, when the related connections grow beyond a certain limit, divides them into multiple related ConnectionNodes; that way the cleaning process can continue partially for related connections while broadcasting if needed.

I want to know how well my solution will behave, or what the best way to do this is. Any help would be really appreciated.
ConnectionNode
public class ConnectionNode
{
    private List<WebSocket> connections;
    private BroadcastQueue BroadcastQueue = new BroadcastQueue();
    private bool isBroadCasting = false;
    private bool isCleaning = false;

    public void AddConnection(WebSocket socket)
    {
        if (connections == null)
            connections = new List<WebSocket>();
        connections.Add(socket);
    }

    public void Broadcast(Broadcast broadCast)
    {
        while (isCleaning)
        {
        }
        BroadcastQueue.QueueBroadcast(broadCast);
        if (isBroadCasting)
        {
            return;
        }
        isBroadCasting = true;
        var broadcast = BroadcastQueue.GetNext();
        while (broadCast != null)
        {
            foreach (var ws in connections)
            {
                broadCast.Dispatch(ws);
            }
            broadCast = BroadcastQueue.GetNext();
        }
        isBroadCasting = false;
    }

    public int CleanUnUsedConnections()
    {
        if (isBroadCasting)
            return 0;
        isCleaning = true;
        var i = connections.RemoveAll(s => s.State != WebSocketState.Open);
        isCleaning = false;
        return i;
    }

    public int ConnectionsCount()
    {
        return connections.Count;
    }
}
Manager class
public class WebSocketsManager
{
    static int ConnectionNodesDividerLimit = 1000;
    private ConcurrentDictionary<String, List<ConnectionNode>> mConnectionNodes;
    private readonly ILogger<WebSocketsManager> logger;

    public WebSocketsManager(ILogger<WebSocketsManager> logger)
    {
        this.logger = logger;
    }

    public ConnectionNode RequireNode(string Id)
    {
        if (mConnectionNodes == null)
            mConnectionNodes = new ConcurrentDictionary<String, List<ConnectionNode>>();
        var node = mConnectionNodes.GetValueOrDefault(Id);
        if (node == null)
        {
            node = new List<ConnectionNode>();
            node.Add(new ConnectionNode());
            mConnectionNodes.TryAdd(Id, node);
            return node[0];
        }
        if (ConnectionNodesDividerLimit != 0)
        {
            if (node[0].ConnectionsCount() == ConnectionNodesDividerLimit)
            {
                node.Insert(0, new ConnectionNode());
            }
        }
        return node[0];
    }

    public void ClearUnusedConnections()
    {
        logger.LogInformation("Manager is Clearing ..");
        if (mConnectionNodes == null)
            return;
        if (mConnectionNodes.IsEmpty)
        {
            logger.LogInformation("Empty ## Nothing to clear ..");
            return;
        }
        Dictionary<String, ConnectionNode> ToBeRemovedNodes = new Dictionary<String, ConnectionNode>();
        foreach (var pair in mConnectionNodes)
        {
            bool shoudlRemoveStack = true;
            foreach (var node in pair.Value)
            {
                int i = node.CleanUnUsedConnections();
                logger.LogInformation($"Removed ${i} from connection node(s){pair.Key}");
                if (node.ConnectionsCount() == 0)
                {
                    ToBeRemovedNodes[pair.Key] = node;
                    logger.LogInformation($"To be Removed A node From ..{pair.Key}");
                }
                else
                {
                    shoudlRemoveStack = false;
                }
            }
            if (shoudlRemoveStack)
            {
                ToBeRemovedNodes.Remove(pair.Key);
                List<ConnectionNode> v = null;
                var b = mConnectionNodes.TryRemove(pair.Key, out v);
                logger.LogInformation($"Removing the Stack ..{pair.Key} Removed ${b}");
            }
        }
        foreach (var pair in ToBeRemovedNodes)
        {
            mConnectionNodes[pair.Key].Remove(pair.Value);
            logger.LogInformation($"Clearing Nodes : Clearing Nodes from stack #{pair.Key}");
        }
    }

    public void Broadcast(string id, Broadcast broadcast)
    {
        var c = mConnectionNodes.GetValueOrDefault(id);
        foreach (var node in c)
        {
            node.Broadcast(broadcast);
        }
    }
}
The service
public class SocketsConnectionsCleaningService : BackgroundService
{
    private readonly IServiceProvider Povider;
    private Timer Timer = null;
    private bool isRunning = false;
    private readonly ILogger Logger;

    public SocketsConnectionsCleaningService(IServiceProvider Provider, ILogger<SocketsConnectionsCleaningService> Logger)
    {
        this.Povider = Provider;
        this.Logger = Logger;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        Logger.LogInformation("Execute Sync is called ");
        Timer = new Timer(DeleteClosedConnections, null, TimeSpan.FromMinutes(0), TimeSpan.FromMinutes(2));
        return Task.CompletedTask;
    }

    private void DeleteClosedConnections(object state)
    {
        Logger.LogInformation("Clearing ");
        if (isRunning)
        {
            Logger.LogInformation("A Task is Running Return ");
            return;
        }
        isRunning = true;
        var connectionManager = Povider.GetService(typeof(WebSocketsManager)) as WebSocketsManager;
        connectionManager.ClearUnusedConnections();
        isRunning = false;
        Logger.LogInformation($"Finished Cleaning !");
    }
}
Usage in a controller looks like this:
[HttpGet("ws")]
public async Task SomeRealtimeFunction()
{
if (HttpContext.IsWebSocketsRequest())
{
using var socket = await HttpContext.AcceptSocketRequest();
try
{
await socket.SendString(" Connected! ");
webSocketsManager.RequireNode("Chat Room")
.AddConnection(socket);
var RecieverHelper = socket.GetRecieveResultsHelper();
string str = await RecieverHelper.ReceiveString();
while (!RecieverHelper.Result.CloseStatus.HasValue)
{
webSocketsManager
.Broadcast("Chat Room", new StringBroadcast(str));
str = await RecieverHelper.ReceiveString();
}
}
catch (Exception e)
{
await socket.SendString("Error!");
await socket.SendString(e.Message);
await socket.SendString(e.ToString());
}
}
else
{
HttpContext.Response.StatusCode = 400;
}
}

Netty: TCP Client Server File transfer: Exception TooLongFrameException:

I am new to Netty and I am trying to design a solution as below for transferring a file from server to client over TCP:
1. Zero copy based file transfer in case of non-ssl based transfer (Using default region of the file)
2. ChunkedFile transfer in case of SSL based transfer.
The Client - Server file transfer works in this way:
1. The client sends the location of the file to be transfered
2. Based on the location (sent by the client) the server transfers the file to the client
The file content could be anything (string/image/PDF etc.) and of any size.
Now, when I run the code below (server/client), I get this TooLongFrameException on the server side, even though the server is just decoding the path received from the client:
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 65536: 215542494061 - discarded
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:522)
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:500)
Now, my question is:
Am I wrong about the order of the encoders and decoders and their configuration? If so, what is the correct way to configure them to receive a file from the server?
I went through a few related Stack Overflow posts (SO Q1, SO Q2, SO Q3, SO Q4). I learned about the LengthFieldBasedFrameDecoder, but I couldn't figure out how to configure its corresponding LengthFieldPrepender at the server (encoding side). Is it even required at all?
Please point me in the right direction.
FileClient:
public final class FileClient {
    static final boolean SSL = System.getProperty("ssl") != null;
    static final int PORT = Integer.parseInt(System.getProperty("port", SSL ? "8992" : "8023"));
    static final String HOST = System.getProperty("host", "127.0.0.1");

    public static void main(String[] args) throws Exception {
        // Configure SSL.
        final SslContext sslCtx;
        if (SSL) {
            SelfSignedCertificate ssc = new SelfSignedCertificate();
            sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
        } else {
            sslCtx = null;
        }
        // Configure the client
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioSocketChannel.class)
             .option(ChannelOption.SO_KEEPALIVE, true)
             .handler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ChannelPipeline pipeline = ch.pipeline();
                     if (sslCtx != null) {
                         pipeline.addLast(sslCtx.newHandler(ch.alloc(), HOST, PORT));
                     }
                     pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(64 * 1024, 0, 8));
                     pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
                     pipeline.addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
                     pipeline.addLast(new ObjectEncoder());
                     pipeline.addLast(new FileClientHandler());
                 }
             });
            // Connect to the server.
            ChannelFuture f = b.connect(HOST, PORT).sync();
            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            // Shut down all event loops to terminate all threads.
            group.shutdownGracefully();
        }
    }
}
FileClientHandler:
public class FileClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        String filePath = "/Users/Home/Documents/Data.pdf";
        ctx.writeAndFlush(Unpooled.wrappedBuffer(filePath.getBytes()));
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("File Client Handler Read method...");
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
    }
}
FileServer:
/**
 * Server that accepts the path of a file and echoes back its content.
 */
public final class FileServer {
    static final boolean SSL = System.getProperty("ssl") != null;
    static final int PORT = Integer.parseInt(System.getProperty("port", SSL ? "8992" : "8023"));

    public static void main(String[] args) throws Exception {
        // Configure SSL.
        final SslContext sslCtx;
        if (SSL) {
            SelfSignedCertificate ssc = new SelfSignedCertificate();
            sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
        } else {
            sslCtx = null;
        }
        // Configure the server.
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
             .option(ChannelOption.SO_KEEPALIVE, true).handler(new LoggingHandler(LogLevel.INFO))
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ChannelPipeline pipeline = ch.pipeline();
                     if (sslCtx != null) {
                         pipeline.addLast(sslCtx.newHandler(ch.alloc()));
                     }
                     pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(64 * 1024, 0, 8));
                     pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
                     pipeline.addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
                     pipeline.addLast(new ObjectEncoder());
                     pipeline.addLast(new ChunkedWriteHandler());
                     pipeline.addLast(new FileServerHandler());
                 }
             });
            // Start the server.
            ChannelFuture f = b.bind(PORT).sync();
            // Wait until the server socket is closed.
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
FileServerHandler:
public class FileServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object obj) throws Exception {
        RandomAccessFile raf = null;
        long length = -1;
        try {
            ByteBuf buff = (ByteBuf) obj;
            byte[] bytes = new byte[buff.readableBytes()];
            buff.readBytes(bytes);
            String msg = new String(bytes);
            raf = new RandomAccessFile(msg, "r");
            length = raf.length();
        } catch (Exception e) {
            ctx.writeAndFlush("ERR: " + e.getClass().getSimpleName() + ": " + e.getMessage() + '\n');
            return;
        } finally {
            if (length < 0 && raf != null) {
                raf.close();
            }
        }
        if (ctx.pipeline().get(SslHandler.class) == null) {
            // SSL not enabled - can use zero-copy file transfer.
            ctx.writeAndFlush(new DefaultFileRegion(raf.getChannel(), 0, length));
        } else {
            // SSL enabled - cannot use zero-copy file transfer.
            ctx.writeAndFlush(new ChunkedFile(raf));
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        System.out.println("Exception server.....");
    }
}
I referred to Netty in Action and the code samples from here.
There are multiple things wrong with your server/client. First, the SSL: for the client you don't need to initialize an SslContext for a server; instead you would do something like this:
sslCtx = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();
On the server side of things you use a SelfSignedCertificate, which in itself isn't wrong, but keep in mind that it should only be used for debugging purposes and not in production. In addition, you use ChannelOption.SO_KEEPALIVE, which isn't recommended since the keepalive interval is OS-dependent. Furthermore, you added an ObjectEncoder/ObjectDecoder to your pipeline, which in your case don't do anything useful, so you can remove them.
You also configured your LengthFieldBasedFrameDecoder wrong, due to an incomplete and incorrect parameter list. Per the Netty docs, you need the version of the constructor that defines lengthFieldLength and initialBytesToStrip. Besides not stripping the length field, you also defined the wrong lengthFieldLength: it should match your LengthFieldPrepender's lengthFieldLength, which is 4 bytes. In conclusion, you could use the constructor like this:
new LengthFieldBasedFrameDecoder(64 * 1024, 0, 4, 0, 4)
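Putting the framing advice together, the server-side pipeline might then look roughly like this (a sketch only, reusing the handler names from your initializer and dropping the object codecs as suggested above):
if (sslCtx != null) {
    pipeline.addLast(sslCtx.newHandler(ch.alloc()));
}
pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(64 * 1024, 0, 4, 0, 4));
pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
// ObjectDecoder/ObjectEncoder removed; they add nothing here.
pipeline.addLast(new ChunkedWriteHandler());
pipeline.addLast(new FileServerHandler());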
In both your handlers you don't specify a Charset when encoding/decoding your String, which could lead to problems: if no Charset is defined, the system default is used, which can vary. You could do something like this:
//to encode the String
string.getBytes(StandardCharsets.UTF_8);
//to decode the String
new String(bytes, StandardCharsets.UTF_8);
Additionally, you tried to use DefaultFileRegion when no SslHandler was added to the pipeline. That would have been fine if you hadn't added the length-field handlers, since prepending the length field requires a memory copy of the bytes to be sent. Moreover, I would recommend using ChunkedNioFile instead of ChunkedFile because it's non-blocking, which is always a good thing. You would do this like that:
new ChunkedNioFile(randomAccessFile.getChannel())
One final thing: decoding a chunked file. Since it's split into chunks, you can simply assemble them together with a simple OutputStream. Here's an old file handler of mine:
public class FileTransferHandler extends SimpleChannelInboundHandler<ByteBuf> {
    private final Path path;
    private final int size;
    private final int hash;
    private OutputStream outputStream;
    private int writtenBytes = 0;
    private byte[] buffer = new byte[0];

    protected FileTransferHandler(Path path, int size, int hash) {
        this.path = path;
        this.size = size;
        this.hash = hash;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf byteBuf) throws Exception {
        if (this.outputStream == null) {
            Files.createDirectories(this.path.getParent());
            if (Files.exists(this.path))
                Files.delete(this.path);
            this.outputStream = Files.newOutputStream(this.path, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
        int size = byteBuf.readableBytes();
        if (size > this.buffer.length)
            this.buffer = new byte[size];
        byteBuf.readBytes(this.buffer, 0, size);
        this.outputStream.write(this.buffer, 0, size);
        this.writtenBytes += size;
        if (this.writtenBytes == this.size && MurMur3.hash(this.path) != this.hash) {
            System.err.println("Received file has wrong hash");
            return;
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        if (this.outputStream != null)
            this.outputStream.close();
    }
}

Can Storm's HdfsBolt flush data after a timeout as well?

We are using Storm to process streaming data and store it into HDFS. We have got everything working, but have one issue. I understand that we can specify the number of tuples after which the data gets flushed to HDFS using a SyncPolicy, something like this:
SyncPolicy syncPolicy = new CountSyncPolicy(Integer.parseInt(args[3]));
The question I have is: can the data also be flushed after a timeout? For example, say we have set the SyncPolicy above to 1000 tuples. If for whatever reason we get 995 tuples and then the data stops coming in for a while, is there any way that Storm can flush the 995 records to HDFS after a specified timeout (say 5 seconds)?
Thanks in advance for any help on this!
Shay
Yes, if you send a tick tuple to the HDFS bolt, it will cause the bolt to try to sync to the HDFS file system. All this happens in the HDFS bolt's execute function.
To configure tick tuples for your topology, set the tick tuple frequency in your topology config. In Java, setting it to every 300 seconds would look like:
Config topologyConfig = new Config();
topologyConfig.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 300);
StormSubmitter.submitTopology("mytopology", topologyConfig, builder.createTopology());
You'll have to adjust that last line depending on your circumstances.
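If you would rather scope the tick frequency to just the HDFS bolt instead of the whole topology, a common alternative (a sketch under the assumption of Storm 1.x package names, not taken from the answer above) is to override getComponentConfiguration() in the bolt:

import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class TickAwareBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public Map<String, Object> getComponentConfiguration() {
        Config conf = new Config();
        // This bolt alone receives a tick tuple every 300 seconds.
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 300);
        return conf;
    }

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        // Real logic would distinguish tick tuples from data tuples here.
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // No output streams in this sketch.
    }
}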
There is an alternative solution to this problem.
First, let's clarify the sync policy: if your sync policy is 1000, then HdfsBolt only syncs the data every 1000 tuples by calling the hsync() method in execute(). That only clears the buffer by pushing data to disk; for faster writes the disk may use its cache instead of writing to the file directly.
The data is written to the file only when the size of the data matches your rotation policy, which you specify at the time of bolt creation:
FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(100.0f, Units.KB);
So, to flush the records to the file after a timeout, separate the tick tuples from normal tuples in the execute method and calculate the time difference between them. If the difference is greater than the timeout period, write the data to the file.
By handling tick tuples differently you also avoid having the tick tuples themselves written to your file.
See the code below for a better understanding:
public class CustomHdfsBolt1 extends AbstractHdfsBolt {
    private static final Logger LOG = LoggerFactory.getLogger(CustomHdfsBolt1.class);
    private transient FSDataOutputStream out;
    private RecordFormat format;
    private long offset = 0L;
    private int tickTupleCount = 0;
    private String type;
    private long normalTupleTime;
    private long tickTupleTime;

    public CustomHdfsBolt1() {
    }

    public CustomHdfsBolt1(String type) {
        this.type = type;
    }

    public CustomHdfsBolt1 withFsUrl(String fsUrl) {
        this.fsUrl = fsUrl;
        return this;
    }

    public CustomHdfsBolt1 withConfigKey(String configKey) {
        this.configKey = configKey;
        return this;
    }

    public CustomHdfsBolt1 withFileNameFormat(FileNameFormat fileNameFormat) {
        this.fileNameFormat = fileNameFormat;
        return this;
    }

    public CustomHdfsBolt1 withRecordFormat(RecordFormat format) {
        this.format = format;
        return this;
    }

    public CustomHdfsBolt1 withSyncPolicy(SyncPolicy syncPolicy) {
        this.syncPolicy = syncPolicy;
        return this;
    }

    public CustomHdfsBolt1 withRotationPolicy(FileRotationPolicy rotationPolicy) {
        this.rotationPolicy = rotationPolicy;
        return this;
    }

    public CustomHdfsBolt1 addRotationAction(RotationAction action) {
        this.rotationActions.add(action);
        return this;
    }

    protected static boolean isTickTuple(Tuple tuple) {
        return tuple.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
                && tuple.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID);
    }

    public void execute(Tuple tuple) {
        try {
            if (isTickTuple(tuple)) {
                tickTupleTime = Calendar.getInstance().getTimeInMillis();
                // Elapsed time since the last normal tuple arrived.
                long timeDiff = tickTupleTime - normalTupleTime;
                long diffInSeconds = TimeUnit.MILLISECONDS.toSeconds(timeDiff);
                if (diffInSeconds > 5) { // specify the value you want.
                    this.rotateWithOutFileSize(tuple);
                }
            } else {
                normalTupleTime = Calendar.getInstance().getTimeInMillis();
                this.rotateWithFileSize(tuple);
            }
        } catch (IOException var6) {
            LOG.warn("write/sync failed.", var6);
            this.collector.fail(tuple);
        }
    }

    public void rotateWithFileSize(Tuple tuple) throws IOException {
        syncHdfs(tuple);
        this.collector.ack(tuple);
        if (this.rotationPolicy.mark(tuple, this.offset)) {
            this.rotateOutputFile();
            this.offset = 0L;
            this.rotationPolicy.reset();
        }
    }

    public void rotateWithOutFileSize(Tuple tuple) throws IOException {
        syncHdfs(tuple);
        this.collector.ack(tuple);
        this.rotateOutputFile();
        this.offset = 0L;
        this.rotationPolicy.reset();
    }

    public void syncHdfs(Tuple tuple) throws IOException {
        byte[] e = this.format.format(tuple);
        synchronized (this.writeLock) {
            this.out.write(e);
            this.offset += (long) e.length;
            if (this.syncPolicy.mark(tuple, this.offset)) {
                if (this.out instanceof HdfsDataOutputStream) {
                    ((HdfsDataOutputStream) this.out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
                } else {
                    this.out.hsync();
                }
                this.syncPolicy.reset();
            }
        }
    }

    public void closeOutputFile() throws IOException {
        this.out.close();
    }

    public void doPrepare(Map conf, TopologyContext topologyContext, OutputCollector collector) throws IOException {
        LOG.info("Preparing HDFS Bolt...");
        this.fs = FileSystem.get(URI.create(this.fsUrl), this.hdfsConfig);
        this.tickTupleCount = 0;
        this.normalTupleTime = 0;
        this.tickTupleTime = 0;
    }

    public Path createOutputFile() throws IOException {
        Path path = new Path(this.fileNameFormat.getPath(),
                this.fileNameFormat.getName((long) this.rotation, System.currentTimeMillis()));
        this.out = this.fs.create(path);
        return path;
    }
}
You can directly use this class in your project.
Thanks,

How to import an xquery module using Saxon

I am having some trouble running an XQuery with Saxon 9 HE that has a reference to an external module.
I would like Saxon to resolve the module with a relative path rather than an absolute one.
the module declaration
module namespace common = "http://my-xquery-utils";
from the main xquery
import module namespace common = "http://my-xquery-utils" at "/home/myself/common.xquery";
from my java code
public class SaxonInvocator {
    private static Processor proc = null;
    private static XQueryEvaluator xqe = null;
    private static DocumentBuilder db = null;
    private static StaticQueryContext ctx = null;

    /**
     * Utility for debug, should not be called outside your IDE
     *
     * @param args xml, xqFile, xqParameter
     */
    public static void main(String[] args) {
        XmlObject instance = null;
        try {
            instance = XmlObject.Factory.parse(new File(args[0]));
        } catch (XmlException ex) {
            Logger.getLogger(SaxonInvocator.class.getName()).log(Level.SEVERE, null, ex);
        } catch (IOException ex) {
            Logger.getLogger(SaxonInvocator.class.getName()).log(Level.SEVERE, null, ex);
        }
        System.out.print(transform(instance, args[1], args[2]));
    }

    public static String transform(XmlObject input, String xqFile, String xqParameter) {
        String result = null;
        try {
            proc = new Processor(false);
            proc.getUnderlyingConfiguration().getOptimizer().setOptimizationLevel(0);
            ctx = proc.getUnderlyingConfiguration().newStaticQueryContext();
            ctx.setModuleURIResolver(new ModuleURIResolver() {
                @Override
                public StreamSource[] resolve(String moduleURI, String baseURI, String[] locations) throws XPathException {
                    StreamSource[] modules = new StreamSource[locations.length];
                    for (int i = 0; i < locations.length; i++) {
                        modules[i] = new StreamSource(getResourceAsStream(locations[i]));
                    }
                    return modules;
                }
            });
            db = proc.newDocumentBuilder();
            XQueryCompiler comp = proc.newXQueryCompiler();
            XQueryExecutable exp = comp.compile(getResourceAsStream(xqFile));
            xqe = exp.load();
            ByteArrayInputStream bais = new ByteArrayInputStream(input.xmlText().getBytes("UTF-8"));
            StreamSource ss = new StreamSource(bais);
            XdmNode node = db.build(ss);
            xqe.setExternalVariable(new QName(xqParameter), node);
            result = xqe.evaluate().toString();
        } catch (SaxonApiException e) {
            e.printStackTrace();
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return result;
    }

    public static InputStream getResourceAsStream(String resource) {
        InputStream stream = SaxonInvocator.class.getResourceAsStream("/" + resource);
        if (stream == null) {
            stream = SaxonInvocator.class.getResourceAsStream(resource);
        }
        if (stream == null) {
            stream = SaxonInvocator.class.getResourceAsStream("my/project/" + resource);
        }
        if (stream == null) {
            stream = SaxonInvocator.class.getResourceAsStream("/my/project/" + resource);
        }
        return stream;
    }
}
If I change it to a relative path like
import module namespace common = "http://my-xquery-utils" at "common.xquery";
I get
Error on line 22 column 1
XQST0059: java.io.FileNotFoundException
I am not sure how the ModuleURIResolver should be used.
Saxon questions are best asked on the Saxon forum at http://saxonica.plan.io - questions asked here will probably be noticed eventually but sometimes, like this time, they aren't our first priority.
The basic answer is that for the relative URI to resolve, the base URI needs to be known, which means that you need to ensure that the baseURI property in the XQueryCompiler is set. This happens automatically if you compile the query from a File, but not if you compile it from an InputStream.
If you don't know a suitable base URI to set, the alternative is to write a ModuleURIResolver, which could for example fetch the module by making another call on getResourceAsStream().
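For example, a minimal sketch of the base-URI route (the directory used here is illustrative): set a base URI on the XQueryCompiler before compiling from a stream, so that a relative location such as "common.xquery" can be resolved against it.
XQueryCompiler comp = proc.newXQueryCompiler();
// Any URI that relative module locations should resolve against; adjust to your layout.
comp.setBaseURI(new File("/home/myself/").toURI());
XQueryExecutable exp = comp.compile(getResourceAsStream(xqFile));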

Need Deadlock in web application

For some reason I want to see how a deadlock occurs in a web application, so I used the code below. But when I deploy the web application and test it, I never end up in a deadlock situation. Any help?
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.ServletException;
import java.util.ArrayList;
import java.io.IOException;
import java.io.PrintWriter;

public class DeadLockServlet extends HttpServlet
{
    public static ArrayList student = new ArrayList();
    public static ArrayList employee = new ArrayList();
    PrintWriter out;

    @Override
    protected void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
    {
        String lsAction = request.getParameter("action");
        String lsValue = request.getParameter("data");
        out = response.getWriter();
        String msg = "";
        if (lsAction != null)
        {
            if (lsAction.equals("addStudent"))
            {
                addStudent(lsValue);
                msg = "Student added: " + lsValue;
            }
            else if (lsAction.equals("addEmployee"))
            {
                addEmployee(lsValue);
                msg = "Employee added: " + lsValue;
            }
        }
        else
        {
            msg = "Invalid Request";
        }
        request.setAttribute("msg", msg);
        request.setAttribute("student", student);
        request.setAttribute("employee", employee);
        request.getRequestDispatcher("index.jsp").forward(request, response);
    }

    public void addStudent(String lsValue)
    {
        synchronized (employee)
        {
            synchronized (student)
            {
                if (lsValue != null && !lsValue.equals(""))
                {
                    student.add(lsValue);
                }
            }
        }
    }

    public void addEmployee(String lsValue)
    {
        synchronized (student)
        {
            synchronized (employee)
            {
                if (lsValue != null && !lsValue.equals(""))
                {
                    employee.add(lsValue);
                }
            }
        }
    }
}
Although your lock ordering will cause a deadlock, it requires exact timing. I don't think it's possible for you to do that without running that web server in the debugger and stepping through the code.
If you don't want to or can't do that, add a sufficiently long sleep between the two synchronized blocks in both addStudent and addEmployee. Say, 1 minute each. Then hit the servlet from two clients at around the same time, one doing addStudent, the other doing addEmployee.
The 'student' request handling thread will get the 'employee' lock, then sit and wait a bit. The 'employee' request handling thread will get the 'student' lock, then sit and wait. Then one of them will wake up, and try to grab the other lock, and the other thread will do the reverse, and presto: deadlock.
The add*** methods should be changed to something like this:
public void addStudent(String lsValue)
{
    synchronized (employee)
    {
        try
        {
            Thread.sleep(60 * 1000); // hold the employee lock long enough for the requests to overlap
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
        synchronized (student)
        {
            if (lsValue != null && !lsValue.equals(""))
            {
                student.add(lsValue);
            }
        }
    }
}
public void addEmployee(String lsValue)
{
    synchronized (student)
    {
        try
        {
            Thread.sleep(60 * 1000); // hold the student lock long enough for the requests to overlap
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
        synchronized (employee)
        {
            if (lsValue != null && !lsValue.equals(""))
            {
                employee.add(lsValue);
            }
        }
    }
}
Obviously deadlocks are to be avoided, but hey, that's a whole nother question!
