I have been trying to figure this out for over 3 days now. It is part of my fortnightly assignment for my uni. However, I have hit a roadblock. Any help will be much appreciated.
Specifics of the assignment I have figured out so far:
I must use an Item class that defines (get and set) the weight and value of the stuff.
A Table ArrayList holds the value and weight of the item array stuffList.
Check whether the maximum-value stuff (greedy) fits inside the bag's capacity, and add the index of this stuff from the Table ArrayList to the bag ArrayList.
Return the bag ArrayList.
The method I have written so far (it doesn't seem to be working):
public static ArrayList<Integer> greedySelection(Item[] stuffList, int capacity)
{
    ArrayList<Integer> bag = new ArrayList<Integer>();
    ArrayList<Integer> table = new ArrayList<Integer>();
    for (int i = 0; i < table.size(); i++) {
        if (table.get(i) < capacity) {
            bag.add(i);
            table.remove(i);
        }
    }
    return bag;
}
Your code has several problems. You haven't focused on the basic tenet of a recursive algorithm:
Check to see whether you've "hit bottom".
If so, return the simple result.
If not, then do one trivial operation and recur on the remaining task.
For this routine, you need to figure out
When you've "hit bottom": when you can't add anything else.
What is the simple result at this point? Simply return the existing bag?
To recur, you add the most valuable item (greed) you can fit into the bag and call yourself with the new state of affairs. What is that state of affairs? At the least, you need to adjust the item list, the bag contents, and the remaining capacity.
Note that your initial state has to be set in the calling program and passed into the routine.
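To make that shape concrete, here is a minimal sketch, not a finished solution: it assumes the getValue()/getWeight() accessors from your Item spec, and the caller supplies the full item list, an empty bag, and the capacity.

import java.util.ArrayList;
import java.util.List;

public static ArrayList<Integer> greedySelection(List<Item> items, ArrayList<Integer> bag, int capacity) {
    // find the most valuable item that still fits (getValue()/getWeight() assumed from the Item spec)
    int best = -1;
    for (int i = 0; i < items.size(); i++) {
        Item candidate = items.get(i);
        if (candidate.getWeight() <= capacity
                && (best < 0 || candidate.getValue() > items.get(best).getValue())) {
            best = i;
        }
    }
    if (best < 0) {
        return bag;                                 // hit bottom: nothing else fits
    }
    // NOTE: for simplicity this records the index within the current (shrinking) list
    bag.add(best);                                  // one trivial operation: take the greediest item
    int remaining = capacity - items.get(best).getWeight();
    items.remove(best);                             // shrink the remaining task...
    return greedySelection(items, bag, remaining);  // ...and recur
}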
Does this get you moving?
I was going to use a Hashtable, but an existing answer said only LinkedHashMap preserves the insertion order. So it seems that I can get the insertion order from the entries or keys properties.
My question is, when the map has n elements, if I want to remove the first n/2 elements, is there a better way than looping through the keys and repeatedly calling remove(key)? That is, something like this
val a = LinkedHashMap<Int, Int>()
val n = 10
for (i in 1..n) {
    a[i] = i * 10
}
a.removeRange(0, n / 2)
instead of
val a = LinkedHashMap<Int, Int>()
val n = 10
for (i in 1..n) {
    a[i] = i * 10
}
var i = 0
val keysToRemove = ArrayList<Int>()
for (k in a.keys) {
    if (i >= n / 2)
        break
    else
        i++
    keysToRemove.add(k)
}
for (k in keysToRemove) {
    a.remove(k)
}
The purpose of this is that I use the map as a cache, and when the cache is full, I want to purge the oldest half of the entries. I do not have to use LinkedHashMap as long as I can:
Find the value using a key, efficiently.
Remove a range of entries at once.
There's no method in the class that makes this possible. The source code doesn't have any operations for ranges of keys or entries. Since the linking is built on top of the HashMap logic, each entry still has to be individually found by a hashed key lookup before it can be removed, so removing a range couldn't be done any faster in a LinkedHashMap; the LinkedList-versus-ArrayList analogy doesn't carry over here.
For simpler code that's equivalent to what you're doing:
a.keys.take(a.size / 2).forEach(a::remove)
If you don't want to use a library for a cache, LinkedHashMap is designed so you can easily build your own by subclassing. For instance, a basic one that simply removes the oldest entry when adding an element would push the collection above a certain size:
class CacheHashMap<K, V>(private var maxSize: Int) : LinkedHashMap<K, V>() {
    // called by LinkedHashMap after each put; returning true evicts the eldest entry
    override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>?): Boolean =
        size > maxSize
}
Also, if you set accessOrder to true in your constructor call, the map orders its entries from least recently used to most recently used, which might be more apt for your situation than insertion order.
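For reference, a minimal sketch of that constructor call, written in Java since LinkedHashMap is the underlying java.util class (the three arguments are initialCapacity, loadFactor, and accessOrder; the capacity and load factor values here are just the documented defaults):

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderExample {
    public static void main(String[] args) {
        // accessOrder = true: iteration order is least to most recently accessed
        Map<Integer, Integer> cache = new LinkedHashMap<>(16, 0.75f, true);
        cache.put(1, 10);
        cache.put(2, 20);
        cache.get(1);                       // touching key 1 moves it to the end
        System.out.println(cache.keySet()); // prints [2, 1]
    }
}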
EDIT: Sorry, I missed the part about using this as an LRU cache; for that use case, TreeMap will not be suitable.
If insertion order is just incidental for you, and what you actually want is the natural order of comparable keys, you should use a TreeMap instead.
However, the specific use case of removing half the keys might not be supported directly. Rather, you will find methods to remove keys below/above a certain value and to get the highest/lowest keys.
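For example, a hedged sketch using java.util.TreeMap's headMap (the returned map is a live view, so clearing it removes that key range from the underlying map):

import java.util.TreeMap;

public class HeadMapExample {
    public static void main(String[] args) {
        TreeMap<Integer, Integer> map = new TreeMap<>();
        for (int i = 1; i <= 10; i++) {
            map.put(i, i * 10);
        }
        map.headMap(6).clear();           // drops every entry with key < 6
        System.out.println(map.keySet()); // prints [6, 7, 8, 9, 10]
    }
}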
I have a buffer that is actually an ArrayList<Object>.
Happens async:
This buffer list changes very frequently, I mean 15-50 times in a single second, and the idea is that whenever there's an update, I remove the first element by position, buffer.removeAt(0), and add the new value at the end with buffer.add(new).
At some point I call a function that goes and does a calculation with the buffer list. I go through the list element by element, and at some point I run into an NPE because an element has been removed asynchronously.
How do I solve this NPE? I was thinking of making a deep copy, but making a deep copy means going through the buffer list and doing some data allocation, which basically means that while I make the deep copy I can still run into the NPE.
How are problems like these solved?
How do I solve the NPE?
What would be a more optimized approach, as this is going to consume a lot of memory?
Code:
private fun observeFrequentData() {
    frequentData.observe(owner, Observer { data ->
        if (accelerationData == null) return@Observer
        GlobalScope.launch {
            val a = data[0].toDouble()
            val b = data[1].toDouble()
            val c = a + b
            val timestamp = System.currentTimeMillis()
            val customObj = CustomObj(c, timestamp)
            if (buffer.size >= 5000) {
                buffer.removeAt(0)
            }
            buffer.add(acceleration)
        }
    })
}

fun getBuffer() {
    val mappedData = buffer.map { it.smth } // NPE, it == null
}
If you are doing lots of removals from index 0 and inserts at the end, then ArrayList is probably not the container to use. You can consider using a LinkedList:
buffer.removeFirst();
and
buffer.add(acceleration);
Also note the following from the documentation regarding synchronization:
Note that this implementation is not synchronized. If multiple threads
access a linked list concurrently, and at least one of the threads
modifies the list structurally, it must be synchronized externally. (A
structural modification is any operation that adds or deletes one or
more elements; merely setting the value of an element is not a
structural modification.) This is typically accomplished by
synchronizing on some object that naturally encapsulates the list. If
no such object exists, the list should be "wrapped" using the
Collections.synchronizedList method. This is best done at creation
time, to prevent accidental unsynchronized access to the list:
List list = Collections.synchronizedList(new LinkedList(...));
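Applied to the buffer here, a minimal sketch in Java (the wrapper classes are java.util; note that iteration, as in getBuffer, must still hold the same lock, which the synchronized wrapper does not do for you):

import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

public class SafeBuffer {
    // wrapped at creation time, per the documentation above
    private final List<Object> buffer = Collections.synchronizedList(new LinkedList<>());

    void append(Object acceleration) {
        synchronized (buffer) {       // makes the check-then-act below atomic
            if (buffer.size() >= 5000) {
                buffer.remove(0);     // removing at the head is cheap for a LinkedList
            }
            buffer.add(acceleration);
        }
    }

    void readAll() {
        synchronized (buffer) {       // manual lock required while iterating
            for (Object item : buffer) {
                // safe to read item here
            }
        }
    }
}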
Use the synchronized keyword on your piece of code, as @patrickf suggested.
To take care of performance, instead of making the whole method synchronized, you can wrap just the three buffer-related lines of code (size, removeAt, and add) in a synchronized block.
Something like:
.
.
.
synchronized(buffer) {
    if (buffer.size >= 5000) {
        buffer.removeAt(0)
    }
    buffer.add(acceleration)
}
}
})
Hope this helps!
Starting with the complex reduce example, I have trimmed it down to a single chart, and I am trying to understand how the reduce works.
I have made comments in the code that were not in the example, denoting what I think is happening based on how I read the docs.
function groupArrayAdd(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for add below
    return function(elements, item) {
        // get the position of the key value for this element in the sorted array and put it there
        var pos = bisect.right(elements, keyfn(item));
        elements.splice(pos, 0, item);
        return elements;
    };
}

function groupArrayRemove(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for remove below
    return function(elements, item) {
        // get the position of the key value for this element in the sorted array and splice it out
        var pos = bisect.left(elements, keyfn(item));
        if (keyfn(elements[pos]) === keyfn(item))
            elements.splice(pos, 1);
        return elements;
    };
}

function groupArrayInit() {
    // for each key found by the key function, return this array?
    return []; // the result array where the data is being inserted in sorted order?
}
I am not quite sure my perception of how this is working is right; some of the magic isn't showing itself. Am I correct that elements is the group the reduce function is being called on? Also, how is the array returned by groupArrayInit() being indirectly populated?
Part of me feels that the functions supplied to the reduce call are really array.map functions, not array.reduce functions, but I just can't quite put my finger on why. Having read the docs, I am just not making the connection.
Any help would be appreciated.
Also, have I missed Pens/Fiddles that were created for all these examples, like this one?
http://dc-js.github.io/dc.js/examples/complex-reduce.html is where I started with this, but I had to download the CSV and manually convert it to JSON.
--------------Update
I added some print statements to try to clarify how the add function is working:
function groupArrayAdd(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for add below
    return function(elements, item) {
        console.log("---Start Elements and Item and keyfn(item)----");
        console.log(elements); // elements grouped by run?
        console.log(item);     // not seeing the pattern on what this is on each run
        console.log(keyfn(item));
        console.log("---End----");
        // get the position of the key value for this element in the sorted array and put it there
        var pos = bisect.right(elements, keyfn(item));
        elements.splice(pos, 0, item);
        return elements;
    };
}
and this to print out the group's contents:
console.log("RunAvgGroup");
console.log(runAvgGroup.top(Infinity));
The resulting output appears to be incorrect, because the values are not sorted by key (the run number). Looking at the results of the print statements doesn't seem to help either.
This looks basically right to me. The issues are just conceptual.
Crossfilter’s group.reduce is not exactly like either Array.reduce or Array.map. Group.reduce defines methods for handling adding new records to a group or removing records from a group. So it is conceptually similar to an incremental Array.reduce that supports a reversal operation. This allows filters to be applied and removed.
Group.top returns your list of groups. The value property of these groups should be the elements value that your reduce functions return. The key of the group is the value returned by your group accessor (defined in the dimension.group call that creates your group) or your dimension accessor if you didn’t define a group accessor. Reduce functions work only on the group values and do not have direct access to the group key.
So check those values in the group.top output and hopefully you’ll see the lists of elements you expect.
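To make that add/remove symmetry concrete, here is the idea outside of crossfilter, as an illustrative Java sketch (not the crossfilter API): the reduced value is maintained incrementally, and every add has a matching remove so a filter can be undone without recomputing from scratch.

// illustrative only: an incrementally maintained group value
class RunningSum {
    private double value = 0;

    // like a reduce-add function: fold one new record into the value
    void add(double record) {
        value += record;
    }

    // like a reduce-remove function: reverse a previous add
    void remove(double record) {
        value -= record;
    }

    double value() {
        return value;
    }
}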
I have a project that uses arrays of objects that I'm thinking of moving to ES6 Sets or Maps.
I need to quickly get a random item from them (obviously trivial for my current arrays). How would I do this?
Maps and Sets are not well suited for random access. They are ordered and their size is known, but they are not indexed for access by position. As such, to get the Nth item in a Map or Set, you have to iterate through it to find that item.
The simple way to get a random item from a Set or Map would be to get the entire list of keys/items and then select a random one.
// get random item from a Set
function getRandomItem(set) {
    let items = Array.from(set);
    return items[Math.floor(Math.random() * items.length)];
}
You could make a version that would work with both a Set and a Map like this:
// returns random key from Set or Map
function getRandomKey(collection) {
    let keys = Array.from(collection.keys());
    return keys[Math.floor(Math.random() * keys.length)];
}
This is obviously not something that would perform well with a large Set or Map since it has to iterate all the keys and build a temporary array in order to select a random one.
Since both a Map and a Set have a known size, you could also select a random index based purely on the .size property and then iterate through the Map or Set until you get to the desired Nth item. For large collections, that might be a bit faster and avoids creating the temporary array of keys, at the expense of a little more code, though on average it is still proportional to half the size of the collection.
// returns random key from Set or Map
function getRandomKey(collection) {
    let index = Math.floor(Math.random() * collection.size);
    let cntr = 0;
    for (let key of collection.keys()) {
        if (cntr++ === index) {
            return key;
        }
    }
}
There's a short neat ES6+ version of the answer above:
const getRandomItem = iterable => iterable.get([...iterable.keys()][Math.floor(Math.random() * iterable.size)])
For Maps this returns a random value. For Sets, keys() is an alias for the values() method, but a Set has no get(), so use this short version instead:
const getRandomItem = set => [...set][Math.floor(Math.random()*set.size)]
I have n integers and I need a quick logic test to see that they are all different, and I don't want to compare every combination to find a match... any ideas on a nice, elegant approach?
I don't care what programming language your idea is in, I can convert!
Use a set data structure if your language supports it; you might also look at keeping a hash table of seen elements.
In Python you might try:
seen = {}
n_already_seen = n in seen
seen[n] = n
n_already_seen will be a boolean indicating if n has already been seen.
You don't have to check every combination, thanks to commutativity and transitivity; you can simply go down the list and check each entry against each entry that comes after it. For example:
bool areElementsUnique( int[] arr ) {
    for( int i = 0; i < arr.Length - 1; i++ ) {
        for( int j = i + 1; j < arr.Length; j++ ) {
            if( arr[i] == arr[j] ) return false;
        }
    }
    return true;
}
Note that the inner loop doesn't start from the beginning, but from the next element (i+1).
You can use a hash table or a set type of data structure that uses hashing. Then you can insert all of the elements into the hash table or hash set, checking as you insert whether the element is already in the table/set. If for some reason you don't want to check as you go, you can instead insert all the numbers and then check whether the size of the structure is less than n. If it is less than n, there had to be repeated elements; otherwise, they were all unique.
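A minimal sketch of that size-comparison variant (a HashSet silently drops duplicates, so a smaller set means a repeat existed):

import java.util.HashSet;
import java.util.Set;

public static boolean allUnique(int[] numbers) {
    Set<Integer> set = new HashSet<>();
    for (int number : numbers) {
        set.add(number);
    }
    // anything repeated was collapsed, leaving the set smaller than the array
    return set.size() == numbers.length;
}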
Here is a really compact Java solution. The time-complexity is amortized O(n) and the space complexity is also O(n).
public boolean areAllElementsUnique(int[] list)
{
    Set<Integer> set = new HashSet<Integer>();
    for (int number : list)
        if (set.contains(number))
            return false;
        else
            set.add(number);
    return true;
}