In Producer/Consumer, why does switching the order of up(mutex) and up(fillCount) result in deadlock?

Let's use the code on the Wikipedia page as the example.
semaphore mutex = 1;
semaphore fillCount = 0;
semaphore emptyCount = BUFFER_SIZE;

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        down(mutex);
        putItemIntoBuffer(item);
        up(mutex);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        down(mutex);
        item = removeItemFromBuffer();
        up(mutex);
        up(emptyCount);
        consumeItem(item);
    }
}
Why does swapping the position of up(mutex) and up(fillCount) in the producer function guarantee deadlock? I'm trying to come up with an instance, but can't seem to find any.
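If it helps to experiment with the ordering, here is a minimal runnable sketch of the same pattern using java.util.concurrent.Semaphore. The bounded item count and the ArrayDeque buffer are my own choices for illustration, not part of the Wikipedia pseudocode; you can reorder the two release calls in the producer and observe the behaviour.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class ProducerConsumerDemo {
    static final int BUFFER_SIZE = 4;
    static final int ITEMS = 20;                                     // bounded so the demo terminates
    static final Queue<Integer> buffer = new ArrayDeque<>();         // the shared buffer
    static final Semaphore mutex = new Semaphore(1);                 // guards the buffer
    static final Semaphore fillCount = new Semaphore(0);             // items available
    static final Semaphore emptyCount = new Semaphore(BUFFER_SIZE);  // free slots

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            try {
                for (int item = 0; item < ITEMS; item++) {
                    emptyCount.acquire();        // down(emptyCount)
                    mutex.acquire();             // down(mutex)
                    buffer.add(item);            // putItemIntoBuffer(item)
                    mutex.release();             // up(mutex)  <-- reorder these two releases to experiment
                    fillCount.release();         // up(fillCount)
                }
            } catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int n = 0; n < ITEMS; n++) {
                    fillCount.acquire();         // down(fillCount)
                    mutex.acquire();             // down(mutex)
                    int item = buffer.remove();  // removeItemFromBuffer()
                    mutex.release();             // up(mutex)
                    emptyCount.release();        // up(emptyCount)
                    System.out.println("consumed " + item);  // consumeItem(item)
                }
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}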

ConcurrentModificationException when reinserting a node JavaFX [duplicate]

We all know you can't do the following because of ConcurrentModificationException:
for (Object i : l) {
    if (condition(i)) {
        l.remove(i);
    }
}
But this apparently works sometimes, but not always. Here's some specific code:
public static void main(String[] args) {
    Collection<Integer> l = new ArrayList<>();

    for (int i = 0; i < 10; ++i) {
        l.add(4);
        l.add(5);
        l.add(6);
    }

    for (int i : l) {
        if (i == 5) {
            l.remove(i);
        }
    }

    System.out.println(l);
}
This, of course, results in:
Exception in thread "main" java.util.ConcurrentModificationException
Even though multiple threads aren't doing it. Anyway.
What's the best solution to this problem? How can I remove an item from the collection in a loop without throwing this exception?
I'm also using an arbitrary Collection here, not necessarily an ArrayList, so you can't rely on get.
Iterator.remove() is safe, you can use it like this:
List<String> list = new ArrayList<>();

// This is a clever way to create the iterator and call iterator.hasNext() like
// you would do in a while-loop. It would be the same as doing:
//     Iterator<String> iterator = list.iterator();
//     while (iterator.hasNext()) {
for (Iterator<String> iterator = list.iterator(); iterator.hasNext();) {
    String string = iterator.next();
    if (string.isEmpty()) {
        // Remove the current element from the iterator and the list.
        iterator.remove();
    }
}
Note that Iterator.remove() is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress.
Source: docs.oracle > The Collection Interface
And similarly, if you have a ListIterator and want to add items, you can use ListIterator#add, for the same reason you can use Iterator#remove — it's designed to allow it.
In your case you tried to remove from a list, but the same restriction applies if trying to put into a Map while iterating its content.
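As an aside, if you do need to put into a Map while walking it, one common workaround (a sketch only, assuming java.util.HashMap; buffering is not the only option) is to collect the new entries separately and apply them after the iteration:

Map<String, String> map = new HashMap<>();
map.put("a", "1");
map.put("b", "2");

Map<String, String> pending = new HashMap<>();   // buffer for the entries to add
for (Map.Entry<String, String> entry : map.entrySet()) {
    if (entry.getValue().equals("1")) {
        // putting directly into `map` here could throw ConcurrentModificationException
        pending.put(entry.getKey() + "-copy", entry.getValue());
    }
}
map.putAll(pending);                             // apply the buffered entries afterwards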
This works:
Iterator<Integer> iter = l.iterator();
while (iter.hasNext()) {
    if (iter.next() == 5) {
        iter.remove();
    }
}
I assumed that since a foreach loop is syntactic sugar for iterating, using an iterator wouldn't help... but it gives you this .remove() functionality.
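To make that concrete, the enhanced for loop over a Collection compiles down to roughly the following (a sketch of the desugaring, not the exact generated code), which is why the hidden iterator detects the structural change but gives you no handle to call remove() on:

// for (Integer i : l) { ... }  desugars to approximately:
for (Iterator<Integer> it = l.iterator(); it.hasNext(); ) {
    Integer i = it.next();   // next() runs checkForComodification()
    if (i == 5) {
        l.remove(i);         // bumps modCount, so the next it.next() throws
    }
}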
With Java 8 you can use the new removeIf method. Applied to your example:
Collection<Integer> coll = new ArrayList<>();
//populate
coll.removeIf(i -> i == 5);
Since the question has been already answered i.e. the best way is to use the remove method of the iterator object, I would go into the specifics of the place where the error "java.util.ConcurrentModificationException" is thrown.
Every collection class has a private class which implements the Iterator interface and provides methods like next(), remove() and hasNext().
The code for next looks something like this...
public E next() {
    checkForComodification();
    try {
        E next = get(cursor);
        lastRet = cursor++;
        return next;
    } catch (IndexOutOfBoundsException e) {
        checkForComodification();
        throw new NoSuchElementException();
    }
}
Here the method checkForComodification is implemented as
final void checkForComodification() {
    if (modCount != expectedModCount)
        throw new ConcurrentModificationException();
}
So, as you can see, if you explicitly remove an element from the collection while iterating it, modCount becomes different from expectedModCount, and the next call into the iterator throws ConcurrentModificationException.
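For completeness, the iterator's own remove() is permitted precisely because it re-synchronizes the two counters after delegating the removal. Paraphrased (the exact code differs between JDK versions and list implementations), it looks roughly like this:

public void remove() {
    if (lastRet < 0)
        throw new IllegalStateException();
    checkForComodification();
    try {
        AbstractList.this.remove(lastRet);  // the structural change bumps modCount...
        if (lastRet < cursor)
            cursor--;
        lastRet = -1;
        expectedModCount = modCount;        // ...so the iterator re-syncs its copy
    } catch (IndexOutOfBoundsException e) {
        throw new ConcurrentModificationException();
    }
}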
You can either use the iterator directly like you mentioned, or else keep a second collection, add each item you want to remove to that new collection, and then call removeAll at the end. This lets you keep using the type-safety of the for-each loop at the cost of increased memory use and CPU time (which shouldn't be a huge problem unless you have really, really big lists or a really old computer).
public static void main(String[] args)
{
    Collection<Integer> l = new ArrayList<>();
    Collection<Integer> itemsToRemove = new ArrayList<>();

    for (int i = 0; i < 10; i++) {
        l.add(Integer.valueOf(4));
        l.add(Integer.valueOf(5));
        l.add(Integer.valueOf(6));
    }

    for (Integer i : l)
    {
        if (i.intValue() == 5) {
            itemsToRemove.add(i);
        }
    }

    l.removeAll(itemsToRemove);
    System.out.println(l);
}
In such cases a common trick is (was?) to go backwards:
for (int i = l.size() - 1; i >= 0; i--) {
    if (l.get(i) == 5) {
        l.remove(i);
    }
}
That said, I'm more than happy that there are better ways in Java 8, e.g. removeIf or filter on streams, as sketched below.
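For the streams variant just mentioned, a minimal sketch (assuming java.util.stream.Collectors is imported and reusing the l from the question) builds a new list instead of mutating the one being iterated:

List<Integer> filtered = l.stream()
        .filter(i -> i != 5)              // the int literal unboxes i, so this is a primitive comparison
        .collect(Collectors.toList());

Nothing structural happens to l here, so no ConcurrentModificationException can occur; you simply use (or reassign) the filtered list afterwards.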
Same answer as Claudius with a for loop:
for (Iterator<Object> it = objects.iterator(); it.hasNext();) {
    Object object = it.next();
    if (test) {
        it.remove();
    }
}
With Eclipse Collections, the method removeIf defined on MutableCollection will work:
MutableList<Integer> list = Lists.mutable.of(1, 2, 3, 4, 5);
list.removeIf(Predicates.lessThan(3));
Assert.assertEquals(Lists.mutable.of(3, 4, 5), list);
With Java 8 Lambda syntax this can be written as follows:
MutableList<Integer> list = Lists.mutable.of(1, 2, 3, 4, 5);
list.removeIf(Predicates.cast(integer -> integer < 3));
Assert.assertEquals(Lists.mutable.of(3, 4, 5), list);
The call to Predicates.cast() is necessary here because a default removeIf method was added on the java.util.Collection interface in Java 8.
Note: I am a committer for Eclipse Collections.
Make a copy of the existing list and iterate over the new copy.
for (String str : new ArrayList<String>(listOfStr))
{
    listOfStr.remove(/* object reference or index */);
}
People are asserting that one can't remove from a Collection being iterated by a foreach loop. I just wanted to point out that this is technically incorrect and to describe exactly the code behind that assumption (I know the OP's question is advanced enough that they probably already know this):
for (TouchableObj obj : untouchedSet) {  // <--- This is where ConcurrentModificationException strikes
    if (obj.isTouched()) {
        untouchedSet.remove(obj);
        touchedSet.add(obj);
        break;  // this is key to avoiding returning to the foreach
    }
}
It isn't that you can't remove from the iterated Collection, rather that you can't continue iterating once you do. Hence the break in the code above.
Apologies if this answer covers a somewhat specialist use-case and is more suited to the original thread I arrived here from; that one is marked as a duplicate of this one (despite appearing more nuanced) and locked.
With a traditional for loop
ArrayList<String> myArray = new ArrayList<>();

for (int i = 0; i < myArray.size(); ) {
    String text = myArray.get(i);
    if (someCondition(text)) {
        myArray.remove(i);
    } else {
        i++;
    }
}
ConcurrentHashMap, ConcurrentLinkedQueue, or ConcurrentSkipListMap may be another option, because they never throw a ConcurrentModificationException, even if you remove or add items while iterating.
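For example, a minimal sketch with ConcurrentLinkedQueue (its iterator is weakly consistent, so removing during iteration simply never throws; the printed result is what I would expect for this input):

Collection<Integer> l = new ConcurrentLinkedQueue<>();
Collections.addAll(l, 4, 5, 6, 4, 5, 6);

for (Integer i : l) {
    if (i == 5) {
        l.remove(i);           // no ConcurrentModificationException here
    }
}
System.out.println(l);         // prints [4, 6, 4, 6]

The trade-off is that weakly consistent iterators may or may not reflect concurrent changes, so this is only appropriate when that relaxed guarantee is acceptable.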
Another way is to use a copy of your arrayList just for iteration:
List<Object> l = ...
List<Object> iterationList = ImmutableList.copyOf(l);
for (Object curr : iterationList) {
    if (condition(curr)) {
        l.remove(curr);
    }
}
A ListIterator allows you to add or remove items in the list. Suppose you have a list of Car objects:
List<Car> cars = new ArrayList<>();
// add cars here...

for (ListIterator<Car> carIterator = cars.listIterator(); carIterator.hasNext(); )
{
    Car car = carIterator.next();
    if (<some-condition>)
    {
        carIterator.remove();
    }
    else if (<some-other-condition>)
    {
        carIterator.add(aNewCar);
    }
}
Now, you can remove an element with the following code:
l.removeIf(current -> current == 5);
I know this question is too old to be about Java 8, but for those using Java 8 you can easily use removeIf():
Collection<Integer> l = new ArrayList<Integer>();

for (int i = 0; i < 10; ++i) {
    l.add(new Integer(4));
    l.add(new Integer(5));
    l.add(new Integer(6));
}

l.removeIf(i -> i.intValue() == 5);
Java Concurrent Modification Exception
Single thread
Iterator<String> iterator = list.iterator();
while (iterator.hasNext()) {
    String value = iterator.next();
    if (value.equals("A")) {
        list.remove(value); // the next iterator.next() throws ConcurrentModificationException
    }
}
Solution: iterator remove() method
Iterator<String> iterator = list.iterator();
while (iterator.hasNext()) {
    String value = iterator.next();
    if (value.equals("A")) {
        iterator.remove();
    }
}
Multi thread
Copy/convert and iterate over another collection (for small collections).
Synchronize access to the collection.
Use a thread-safe collection.
I have a suggestion for the problem above: no need for a secondary list or any extra time. Here is an example that does the same thing, but in a different way.
//"list" is ArrayList<Object>
//"state" is some boolean variable, which when set to true, Object will be removed from the list
int index = 0;
while(index < list.size()) {
Object r = list.get(index);
if( state ) {
list.remove(index);
index = 0;
continue;
}
index += 1;
}
This avoids the ConcurrentModificationException.
for (Integer i : l)
{
    if (i.intValue() == 5) {
        itemsToRemove.add(i);
        break;
    }
}
The catch is that after removing the element from the list, if you skip the subsequent internal iterator.next() call (by breaking out of the loop), it still works! Although I don't propose writing code like this, it helps to understand the concept behind it :-)
Cheers!
Example of thread-safe collection modification:
public class Example {

    private final List<String> queue = Collections.synchronizedList(new ArrayList<String>());

    public void removeFromQueue() {
        synchronized (queue) {
            Iterator<String> iterator = queue.iterator();
            while (iterator.hasNext()) {
                String string = iterator.next();
                if (string.isEmpty()) {
                    iterator.remove();
                }
            }
        }
    }
}
I know this question assumes just a Collection, and not more specifically any List. But for those reading this question who are indeed working with a List reference: you can avoid ConcurrentModificationException with a while-loop (modifying the list within it) instead, if you want to avoid Iterator, either in general or specifically to achieve a looping order other than start-to-end stopping at each element (which I believe is the only order Iterator itself can do).
*Update: see the comments below, which clarify that the analogous approach is also achievable with a traditional for-loop.
final List<Integer> list = new ArrayList<>();
for (int i = 0; i < 10; ++i) {
    list.add(i);
}

int i = 1;
while (i < list.size()) {
    if (list.get(i) % 2 == 0) {
        list.remove(i++);
    } else {
        i += 2;
    }
}
No ConcurrentModificationException from that code.
There we see looping that neither starts at the beginning nor stops at every element (which I believe Iterator itself can't do).
FWIW we also see get being called on list, which could not be done if its reference were just a Collection (instead of the more specific List type of Collection): the List interface includes get, but the Collection interface does not. If not for that difference, the list reference could instead be a Collection (and therefore technically this answer would be a direct answer, instead of a tangential one).
FWIW the same code still works after being modified to start at the beginning and stop at every element (just like the Iterator order):
final List<Integer> list = new ArrayList<>();
for (int i = 0; i < 10; ++i) {
    list.add(i);
}

int i = 0;
while (i < list.size()) {
    if (list.get(i) % 2 == 0) {
        list.remove(i);
    } else {
        ++i;
    }
}
One solution could be to rotate the list and remove the first element to avoid the ConcurrentModificationException or IndexOutOfBoundsException
int n = list.size();
for (int j = 0; j < n; j++) {
    // you can also put a condition before remove
    list.remove(0);
    Collections.rotate(list, 1);
}
Collections.rotate(list, -1);
Try this one (removes all elements in the list that equal i):
for (Object i : l) {
    if (condition(i)) {
        // reassigning l leaves the collection being iterated untouched, so no exception is thrown
        l = l.stream().filter(a -> !a.equals(i)).collect(Collectors.toList());
    }
}
You can use a while loop.
Iterator<Map.Entry<String, String>> iterator = map.entrySet().iterator();
while (iterator.hasNext()) {
    Map.Entry<String, String> entry = iterator.next();
    if (entry.getKey().equals("test")) {
        iterator.remove();
    }
}
I ended up with this ConcurrentModificationException while iterating the list using the stream().map() method. However, the for(:) loop did not throw the exception while iterating and modifying the list.
Here is a code snippet, if it is of help to anyone:
Here I'm iterating over an ArrayList<BuildEntity> and modifying it using list.remove(obj):
for (BuildEntity build : uniqueBuildEntities) {
    if (build != null) {
        if (isBuildCrashedWithErrors(build)) {
            log.info("The following build crashed with errors , will not be persisted -> \n{}",
                    build.getBuildUrl());
            uniqueBuildEntities.remove(build);
            if (uniqueBuildEntities.isEmpty()) return EMPTY_LIST;
        }
    }
}
if (uniqueBuildEntities.size() > 0) {
    dbEntries.addAll(uniqueBuildEntities);
}
If you are using a HashMap, in newer versions of Java (8+) you can choose any of these 3 options:
public class UserProfileEntity {
    private String Code;
    private String mobileNumber;
    private LocalDateTime inputDT;
    // getters and setters here
}

HashMap<String, UserProfileEntity> upMap = new HashMap<>();

// remove by value
upMap.values().removeIf(value -> !value.getCode().contains("0005"));

// remove by key
upMap.keySet().removeIf(key -> key.contentEquals("testUser"));

// remove by entry / key + value
upMap.entrySet().removeIf(entry -> (entry.getKey().endsWith("admin")
        || entry.getValue().getInputDT().isBefore(LocalDateTime.now().minusMinutes(3))));
The best way (recommended) is to use the java.util.concurrent package. By using this package you can easily avoid this exception.
Modified code:
public static void main(String[] args) {
    Collection<Integer> l = new CopyOnWriteArrayList<Integer>();

    for (int i = 0; i < 10; ++i) {
        l.add(new Integer(4));
        l.add(new Integer(5));
        l.add(new Integer(6));
    }

    for (Integer i : l) {
        if (i.intValue() == 5) {
            l.remove(i);
        }
    }

    System.out.println(l);
}
Iterators are not always helpful when another thread also modifies the collection. I tried many ways, but then realized that traversing the collection manually is much safer (backwards, for removal):
for (i in myList.size - 1 downTo 0) {
    myList.getOrNull(i)?.also {
        if (it == 5)
            myList.remove(it)
    }
}
In an ArrayList, remove(int index) avoids the System.arraycopy() call when the index is the last element's position, so removing the last element costs essentially nothing.
The arraycopy time increases as the index decreases (more elements have to be shifted), although the number of elements in the list also decreases as you remove.
The most efficient way to remove is therefore in descending order:
while (list.size() > 0) list.remove(list.size() - 1); // O(1) per removal
while (list.size() > 0) list.remove(0);               // O(n) per removal, O(n^2) overall
//region prepare data
ArrayList<Integer> ints = new ArrayList<Integer>();
ArrayList<Integer> toRemove = new ArrayList<Integer>();
Random rdm = new Random();
long millis;
for (int i = 0; i < 100000; i++) {
    Integer integer = rdm.nextInt();
    ints.add(integer);
}
ArrayList<Integer> intsForIndex = new ArrayList<Integer>(ints);
ArrayList<Integer> intsDescIndex = new ArrayList<Integer>(ints);
ArrayList<Integer> intsIterator = new ArrayList<Integer>(ints);
//endregion

//region for index
millis = System.currentTimeMillis();
for (int i = 0; i < intsForIndex.size(); i++)
    if (intsForIndex.get(i) % 2 == 0) intsForIndex.remove(i--);
System.out.println(System.currentTimeMillis() - millis);
//endregion

//region for index desc
millis = System.currentTimeMillis();
for (int i = intsDescIndex.size() - 1; i >= 0; i--)
    if (intsDescIndex.get(i) % 2 == 0) intsDescIndex.remove(i);
System.out.println(System.currentTimeMillis() - millis);
//endregion

//region iterator
millis = System.currentTimeMillis();
for (Iterator<Integer> iterator = intsIterator.iterator(); iterator.hasNext(); )
    if (iterator.next() % 2 == 0) iterator.remove();
System.out.println(System.currentTimeMillis() - millis);
//endregion
for index loop: 1090 msec
for desc index: 519 msec (the best)
for iterator: 1043 msec
You can also use recursion.
Recursion in Java is a process in which a method calls itself; a method that calls itself is called a recursive method. A sketch follows.
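Here is a hedged sketch of what that could look like, recursing on the index so each call handles one element (I would not recommend it over removeIf; it only illustrates the idea, and very long lists risk a StackOverflowError):

// Removes every element equal to `target`, one recursive call per element examined.
static void removeRecursively(List<Integer> list, int index, int target) {
    if (index >= list.size()) {
        return;                        // base case: past the end of the list
    }
    if (list.get(index) == target) {
        list.remove(index);            // remove and re-check the same index
        removeRecursively(list, index, target);
    } else {
        removeRecursively(list, index + 1, target);
    }
}

Calling removeRecursively(list, 0, 5) on a List<Integer> drops every 5 without touching an iterator, so no ConcurrentModificationException can occur.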

Retouching several images in several Tasks

Generalities: explanations about my program and how it works
I am working on a photo-retouching JavaFX application. The final user can load several images. When they click on the REVERSE button, a Task is launched for each image using an Executor. Each of these Tasks executes the reversal algorithm: it fills an ArrayBlockingQueue<Pixel> (using the add method).
When the final user clicks on the REVERSE button, as I said, these Tasks are launched. Just after these statements, I tell the JavaFX Application Thread to draw the Pixels of the ArrayBlockingQueue<Pixel> (using the remove method).
Thus, there is parallelism and concurrency (handled by the ArrayBlockingQueue<Pixel>) between the JavaFX Application Thread and the Tasks, and between the Tasks themselves.
To draw the Pixels of the ArrayBlockingQueue<Pixel>, the JavaFX Application Thread starts an AnimationTimer. The latter contains the previously mentioned remove method. This AnimationTimer is started for each image.
You may be wondering how this AnimationTimer knows which image a removed Pixel belongs to. In fact, each Pixel has a writable_image attribute that specifies the image it belongs to.
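For reference, the Pixel class itself isn't shown in the question; judging from how it is used in the code below, it presumably looks roughly like this (a hypothetical reconstruction, not the author's actual class):

// Hypothetical reconstruction of the Pixel class used below.
public class Pixel {
    private final int x, y;
    private final Color color;
    private final WritableImage writable_image; // the image this pixel belongs to

    public Pixel(int x, int y, Color color, WritableImage writable_image) {
        this.x = x;
        this.y = y;
        this.color = color;
        this.writable_image = writable_image;
    }

    public int getX() { return x; }
    public int getY() { return y; }
    public Color getColor() { return color; }
    public WritableImage getWritableImage() { return writable_image; }
}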
My problems
Tell me if I'm wrong, but my program should work. Indeed:
My JavaFX Application Thread is the only thread that changes the GUI (which is required in JavaFX): the Tasks just do the calculations.
There is no concurrency problem, thanks to the BlockingQueue I use (in particular, there is no possibility of draining).
The AnimationTimer knows which image each Pixel belongs to.
However, it's (obviously!) not the case (otherwise I wouldn't have created this question, haha!).
My problem is that my JavaFX application freezes (first problem), after having drawn only some of the reversed pixels (not all of them), and moreover only on the last loaded image (third problem).
A detail that could be the cause of the problems
But I would need your opinion.
Of course the AnimationTimer doesn't draw the reversed pixels of each image all at once: the drawing is animated. The final user can see each pixel of an image being reversed, little by little. It's very practical for other algorithms, such as the creation of a circle, because the user can watch how the algorithm works.
But to do that, the AnimationTimer needs to read a variable called max. This variable is modified (written) in... each Task. But it's an AtomicLong, so IF I AM NOT WRONG there isn't any concurrency problem between the Tasks themselves, or between the JavaFX Application Thread and these Tasks.
However, this could be the problem: indeed, max's value could be 2000 in Task n°1 (= in image n°1) and 59 in Task n°2 (= in image n°2). The problem is that the AnimationTimer must use 2000 for image n°1 and 59 for image n°2. But if Tasks n°1 and n°2 have both finished, the only value known by the AnimationTimer would be 59...
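Regarding that last point, one possible way to avoid sharing a single max across images would be to key the value by image, for example with a ConcurrentHashMap. This is only a sketch of the idea, using the question's own names; it is not a drop-in fix for the code below:

// Sketch: each Task records the max for its own image, and the
// AnimationTimer reads the value for the image it is currently animating.
private final ConcurrentHashMap<WritableImage, Long> maxPerImage = new ConcurrentHashMap<>();

// In the Task, instead of a single shared setMax(...):
maxPerImage.put(writable_image, (long) image_width * image_height);

// In the AnimationTimer's handle(...), instead of max.get():
long maxForThisImage = maxPerImage.getOrDefault(writable_image, Long.MAX_VALUE);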
Sources
When the user clicks on the REVERSE button
We launch the several Tasks and start the AnimationTimer several times. Class: RightPane.java
WritableImage current_writable_image;

for (int i = 0; i < this.gui.getArrayListImageViewsImpacted().size(); i++) {
    current_writable_image = (WritableImage) this.gui.getArrayListImageViewsImpacted().get(i).getImage();
    this.gui.getGraphicEngine().executor.execute(this.gui.getGraphicEngine().createTask(current_writable_image));
}

for (int i = 0; i < this.gui.getArrayListImageViewsImpacted().size(); i++) {
    current_writable_image = (WritableImage) this.gui.getArrayListImageViewsImpacted().get(i).getImage();
    this.gui.getImageAnimation().setWritableImage(current_writable_image);
    this.gui.getImageAnimation().startAnimation();
}
The Tasks are part of the class GraphicEngine, which contains an Executor:
public final Executor executor = Executors.newCachedThreadPool(runnable -> {
    Thread t = new Thread(runnable);
    t.setDaemon(true);
    return t;
});

public Task createTask(WritableImage writable_image) {
    int image_width = (int) writable_image.getWidth(), image_height = (int) writable_image.getHeight();

    Task ret = new Task() {
        protected Void call() {
            switch (operation_to_do) {
                case "reverse":
                    gui.getImageAnimation().setMax(image_width * image_height); // USE OF "MAX" VARIABLE
                    reverseImg(writable_image);
                    break;
            }
            return null;
        }
    };

    return ret;
}
The same class, GraphicEngine, also contains the reversal algorithm:
private void reverseImg(WritableImage writable_image) {
    int image_width = (int) writable_image.getWidth(), image_height = (int) writable_image.getHeight();
    BlockingQueue<Pixel> updates = gui.getUpdates();
    PixelReader pixel_reader = writable_image.getPixelReader();
    double[] rgb_reversed;

    for (int x = 0; x < image_width; x++) {
        for (int y = 0; y < image_height; y++) {
            rgb_reversed = PhotoRetouchingFormulas.reverse(pixel_reader.getColor(x, y).getRed(),
                    pixel_reader.getColor(x, y).getGreen(), pixel_reader.getColor(x, y).getBlue());
            updates.add(new Pixel(x, y, Color.color(rgb_reversed[0], rgb_reversed[1], rgb_reversed[2],
                    pixel_reader.getColor(x, y).getOpacity()), writable_image));
        }
    }
}
Finally, here is the code of the class AnimationTimer. There is nothing special about it. Note that the variable max is used here too (and in the class GraphicEngine: setMax).
public class ImageAnimation extends AnimationTimer {

    private Gui gui;
    private AtomicLong max, speed, max_delay;
    private long count, start;
    private WritableImage writable_image;

    ImageAnimation(Gui gui) {
        this.gui = gui;
        this.count = 0;
        this.start = -1;
        this.max = new AtomicLong(Long.MAX_VALUE);
        this.max_delay = new AtomicLong(999_000_000);
        this.speed = new AtomicLong(this.max_delay.get());
    }

    public void setMax(long max) {
        this.max.set(max);
    }

    public void setSpeed(long speed) { this.speed.set(speed); }

    public double getMaxDelay() { return this.max_delay.get(); }

    @Override
    public void handle(long timestamp) {
        if (start < 0) {
            start = timestamp;
            return;
        }

        ArrayList<Pixel> list_sorted_pixels = new ArrayList<>();
        BlockingQueue<Pixel> updates = this.gui.getUpdates();

        for (Pixel new_pixel : updates) {
            if (new_pixel.getWritableImage() == writable_image) {
                list_sorted_pixels.add(new_pixel);
            }
        }

        while (list_sorted_pixels.size() > 0
                && timestamp - start > (count * this.speed.get()) / (writable_image.getWidth())
                && !updates.isEmpty()) {
            Pixel update = list_sorted_pixels.remove(0);
            updates.remove(update);
            count++;
            if (update.getX() >= 0 && update.getY() >= 0) {
                writable_image.getPixelWriter().setColor(update.getX(), update.getY(), update.getColor());
            }
        }

        if (count >= max.get()) {
            this.count = 0;
            this.start = -1;
            this.max.set(Long.MAX_VALUE);
            stop();
        }
    }

    public void setWritableImage(WritableImage writable_image) { this.writable_image = writable_image; }

    public void startAnimation() {
        this.start();
    }
}

Network timeout using node.js client at about 5 seconds

Note, this question was previously very different. This is now the real issue. Which is...
When making a call to executeStoredProcedure() using the node.js client, I get a 408 code (RequestTimeout) and no data back from the sproc's "body". This seems to occur at about 5 seconds, but when I time-bound things from inside the sproc itself, any value over say 700 milliseconds causes a network timeout (although I don't see it until about 5 seconds have passed).
Note, I can have longer running sprocs with read operations. This only seems to occur when I have a lot of createDocument() operations, so I don't think it's on the client side. I think something is happening on the server side.
It's still possible that my original thought is true and I'm not getting a false back from a createDocument() call which causes my sproc to keep running past its timeout and that's what's causing the 408.
Here is the time limited version of my create documents sproc
generateData = function(memo) {
    var collection, collectionLink, nowTime, row, startTime, timeout;

    if ((memo != null ? memo.remaining : void 0) == null) {
        throw new Error('generateData must be called with an object containing a `remaining` field.');
    }
    if (memo.totalCount == null) {
        memo.totalCount = 0;
    }
    memo.countForThisRun = 0;
    timeout = memo.timeout || 600; // Works at 600. Fails at 800.
    startTime = new Date();
    memo.stillTime = true;
    collection = getContext().getCollection();
    collectionLink = collection.getSelfLink();
    memo.stillQueueing = true;

    while (memo.remaining > 0 && memo.stillQueueing && memo.stillTime) {
        row = {
            a: 1,
            b: 2
        };
        getContext().getResponse().setBody(memo);
        memo.stillQueueing = collection.createDocument(collectionLink, row);
        if (memo.stillQueueing) {
            memo.remaining--;
            memo.countForThisRun++;
            memo.totalCount++;
        }
        nowTime = new Date();
        memo.nowTime = nowTime;
        memo.startTime = startTime;
        memo.stillTime = (nowTime - startTime) < timeout;
        if (memo.stillTime) {
            memo.continuation = null;
        } else {
            memo.continuation = 'Value does not matter';
        }
    }

    getContext().getResponse().setBody(memo);
    return memo;
};
The stored procedure above queues document creates in a while loop until the API returns false.
Keep in mind that createDocument() is an asynchronous method. The boolean returned represents whether it is time to wrap up execution right there and then. The return value isn't "smart" enough to estimate and account for how much time the async call will take; so it can't be used for queueing a bunch of calls in a while() loop.
As a result, the stored procedure above doesn't terminate gracefully when the boolean returns false because it has a bunch of createDocument() calls that are still running. The end result is a timeout (which eventually leads to blacklisting on repeated attempts).
In short, avoid this pattern:
while (stillQueueing) {
    stillQueueing = collection.createDocument(collectionLink, row);
}
Instead, you should use the callback for control flow. Here is the refactored code:
function(memo) {
    var collection = getContext().getCollection();
    var collectionLink = collection.getSelfLink();
    var row = {
        a: 1,
        b: 2
    };

    if ((memo != null ? memo.remaining : void 0) == null) {
        throw new Error('generateData must be called with an object containing a `remaining` field.');
    }
    if (memo.totalCount == null) {
        memo.totalCount = 0;
    }
    memo.countForThisRun = 0;

    createMemo();

    function createMemo() {
        var isAccepted = collection.createDocument(collectionLink, row, function(err, createdDoc) {
            if (err) throw err;
            memo.remaining--;
            memo.countForThisRun++;
            memo.totalCount++;
            if (memo.remaining > 0) {
                createMemo();
            } else {
                getContext().getResponse().setBody(memo);
            }
        });

        if (!isAccepted) {
            getContext().getResponse().setBody(memo);
        }
    }
};

Raven DB DocumentStore - throws out of memory exception

I have code like this:
public bool Set(IEnumerable<WhiteForest.Common.Entities.Projections.RequestProjection> requests)
{
    var documentSession = _documentStore.OpenSession();
    //{
    try
    {
        foreach (var request in requests)
        {
            documentSession.Store(request);
        }
        //requests.AsParallel().ForAll(x => documentSession.Store(x));
        documentSession.SaveChanges();
        documentSession.Dispose();
        return true;
    }
    catch (Exception e)
    {
        _log.LogDebug("Exception in RavenRequstRepository - Set. Exception is [{0}]", e.ToString());
        return false;
    }
    //}
}
This code gets called many times. After around 50,000 documents have passed through it, I get an OutOfMemoryException.
Any idea why? Perhaps after a while I need to declare a new DocumentStore?
Thank you.
UPDATE:
I ended up using the Batch/Patch API to perform the update I needed.
You can see the discussion here: https://groups.google.com/d/topic/ravendb/3wRT9c8Y-YE/discussion
Basically, since I only needed to update one property on my objects, and after considering Ayende's comments about re-serializing all the objects back to JSON, I did something like this:
internal void Patch()
{
    List<string> docIds = new List<string>() { "596548a7-61ef-4465-95bc-b651079f4888", "cbbca8d5-be45-4e0d-91cf-f4129e13e65e" };
    using (var session = _documentStore.OpenSession())
    {
        session.Advanced.DatabaseCommands.Batch(GenerateCommands(docIds));
    }
}

private List<ICommandData> GenerateCommands(List<string> docIds)
{
    List<ICommandData> retList = new List<ICommandData>();
    foreach (var item in docIds)
    {
        retList.Add(new PatchCommandData()
        {
            Key = item,
            Patches = new[] { new Raven.Abstractions.Data.PatchRequest()
            {
                Name = "Processed",
                Type = Raven.Abstractions.Data.PatchCommandType.Set,
                Value = new RavenJValue(true)
            }}
        });
    }
    return retList;
}
Hope this helps ...
Thanks a lot.
I just did this for my current project. I chunked the data into pieces and saved each chunk in a new session. This may work for you, too.
Note, this example shows chunking 1,024 documents at a time, but only when there are at least 2,000 to make chunking worthwhile. So far, my inserts got the best performance with a chunk size of 4,096. I think that's because my documents are relatively small.
internal static void WriteObjectList<T>(List<T> objectList)
{
    int numberOfObjectsThatWarrantChunking = 2000; // Don't bother chunking unless we have at least this many objects.

    if (objectList.Count < numberOfObjectsThatWarrantChunking)
    {
        // Just write them all at once.
        using (IDocumentSession ravenSession = GetRavenSession())
        {
            objectList.ForEach(x => ravenSession.Store(x));
            ravenSession.SaveChanges();
        }
        return;
    }

    int numberOfDocumentsPerSession = 1024; // Chunk size

    List<List<T>> objectListInChunks = new List<List<T>>();
    for (int i = 0; i < objectList.Count; i += numberOfDocumentsPerSession)
    {
        objectListInChunks.Add(objectList.Skip(i).Take(numberOfDocumentsPerSession).ToList());
    }

    Parallel.ForEach(objectListInChunks, listOfObjects =>
    {
        using (IDocumentSession ravenSession = GetRavenSession())
        {
            listOfObjects.ForEach(x => ravenSession.Store(x));
            ravenSession.SaveChanges();
        }
    });
}

private static IDocumentSession GetRavenSession()
{
    return _ravenDatabase.OpenSession();
}
Are you trying to save it all in one call?
The DocumentSession needs to turn all of the objects that you pass it into a single request to the server. That means that it may allocate a lot of memory for the write to the server.
Usually we recommend batches of about 1,024 items if you are doing bulk saves.
DocumentStore is a disposable class, so I worked around this problem by disposing the instance after each chunk. I highly doubt this is the most efficient way to run operations, but it will prevent significant memory overhead from happening.
I was running a sort of "delete all" operation like so. You can see the using blocks disposing both the DocumentStore and the IDocumentSession objects after each chunk.
static DocumentStore GetDataStore()
{
    DocumentStore ds = new DocumentStore
    {
        DefaultDatabase = "test",
        Url = "http://localhost:8080"
    };
    ds.Initialize();
    return ds;
}

static IDocumentSession GetDbInstance(DocumentStore ds)
{
    return ds.OpenSession();
}

static void Main(string[] args)
{
    int deleteCount = 0; // documents deleted in the current chunk
    int deleteSum = 0;   // running total of deleted documents
    do
    {
        using (var ds = GetDataStore())
        using (var db = GetDbInstance(ds))
        {
            // The `Take` operation will cap out at 1,024 by default, per Raven documentation
            var list = db.Query<MyClass>().Skip(deleteSum).Take(5000).ToList();
            deleteCount = list.Count;
            deleteSum += deleteCount;
            foreach (var item in list)
            {
                db.Delete(item);
            }
            db.SaveChanges();
            list.Clear();
        }
    } while (deleteCount > 0);
}

Lock out users after three unsuccessful attempts without using a database

I want to lock out my users after three unsuccessful attempts without using a database.
I was trying:
public int loginAttempts()
{
    if (!(ViewData["loginAttempts"] == null))
    {
        ViewData["loginAttempts"] = int.Parse(ViewData["loginAttempts"].ToString()) + 1;
        return int.Parse(ViewData["loginAttempts"].ToString());
    }
    else
    {
        ViewData["loginAttempts"] = 1;
        return 1;
    }
}
But it always returns 1. What modifications do I need to make?
You'd need to store this in the session, not view data. View data is sent to the view and is not persisted between requests.
public int loginAttempts()
{
    if (!(Session["loginAttempts"] == null))
    {
        Session["loginAttempts"] = int.Parse(Session["loginAttempts"].ToString()) + 1;
        return int.Parse(Session["loginAttempts"].ToString());
    }
    else
    {
        Session["loginAttempts"] = 1;
        return 1;
    }
}
Use Session state variables.
