How can I get all 3 values from a Map with the key "2"?
Right now it always takes the last one for any number; for example, if I type in "1" it will take its last letter, "f".
Is it because there is no loop?
AFAIK, it is not possible to directly store multiple values under the same key in a Stencyl map (or in a standard map/dictionary in most languages). Each new value overwrites the previous one, which is why your 2 key is returning i. Instead, you could consider these workarounds:
Name your keys 0a, 0b, 0c, 1a, 1b, 1c, and so on. It's pretty simple to get the values with this method.
Store your values in a single key, separated by commas. E.g. key 0 is a,b,c, 1 is d,e,f, and so on. Then you can use the list blocks with split (value of 0 for TT) using separator , to get the 3 values (which will be a, b and c).
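For what it's worth, here is that second idea sketched in plain code (Julia, purely to illustrate the store-and-split logic; in Stencyl itself you would use the map and list blocks described above):

phone_map = Dict("0" => "a,b,c", "1" => "d,e,f", "2" => "g,h,i")
letters = split(phone_map["2"], ",")   # gives all 3 values for key "2": "g", "h", "i"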
If you have any questions about either of these methods, don't hesitate to ask!
Disclaimer: This is not a database administration or design question. I did not design this database and I do not have rights to change it.
I have a database in which many fields are compound. For example, a single column is used for acre usage for a district. Many districts have one primary crop and the value is a single number, such as 14. Some have two primary crops and it has two numbers separated by a comma like "14,8". Some have three, four, or even five primary crops resulting in a compound value like "14,8,7,4,3".
I am pulling data out of this database for analytical research. Right now, I am pulling columns like that into R, splitting them into 5 values (padding nulls if there aren't 5 values), and performing work on the values. I want to do it in the database itself. I want to split the value on the comma, perform an operation on the resulting values, and then concatenate the result of the operation back into the original column format.
Example: I have a column that is in acres. I want it in square meters. So, I want to take "14,8", temporarily turn it into 14 and 8, multiply each of those by 4046.86, and get "56656.04,32374.88" as my result. What I am currently doing is using regexp_replace. I start with all rows where "acres REGEXP '^[0-9.]+,[0-9.]+,[0-9.]+,[0-9.]+,[0-9.]+$'" for the where clause. That gives me rows with 5 numbers in the field. Then, I can do the first number with "cast(regexp_replace(acres,',.*$','') as float) * 4046.86". I can do each of the 5 using a different regexp_replace. I can concatenate those values back together. Then, I run a query for those with 4 numbers, then 3, then 2, and finally the single number rows.
Is this possible as a single query?
Use a function to parse the string and convert it to the desired result. This will allow you to use a single query for the job.
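For illustration only, here is the split/convert/join logic such a function would implement, sketched in Julia (the name acres_to_sqm is made up; the actual implementation would be a stored function in your database's SQL dialect, called from a single UPDATE or SELECT):

function acres_to_sqm(acres::AbstractString)
    # split "14,8" into its parts, convert each from acres to square meters, rejoin
    parts = split(acres, ',')
    converted = [string(parse(Float64, p) * 4046.86) for p in parts]
    return join(converted, ',')
end

acres_to_sqm("14,8")   # "56656.04,32374.88" (up to floating-point rounding)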
Is it possible to make a vector of pairs of pairs of two integers? Or else, can you suggest a way to create a structure where each member has four attributes and sorting is based on the first one, then, in case of equality, on the second, and so on?
I was wondering what the best method would be to sort a dictionary of type Dict{String, Int} based on the value. I loop over a FASTQ file containing multiple sequence records; each record has a String identifier which serves as the key, and another string whose length I take as the value for that key.
for example:
testdict["ee0a"]=length("aatcg")
testdict["002e4"]=length("aatcgtga")
testdict["12-f9"]=length(aatcgtgacgtga")
In this case the key value pairs would be "ee0a" => 5, "002e4" => 8, and "12-f9" => 13.
What I want to do is sort these pairs from the highest value to the lowest value, after which I sum these values into a different variable until that variable passes a certain threshold. I then need to save the keys I used so I can use them later on.
Is it possible to use the sort() function or a SortedDict to achieve this? I would imagine that if the sorting succeeded I could use a while loop to add my keys to a list and add my values into a different variable until it is greater than my threshold, and then use the list of keys to create a new dictionary with my selected key-value pairs.
However, what would be the fastest way to do this? The FASTQ files I read in can contain multiple GBs worth of data, so I'd love to create a sorted dictionary while reading in the file and select the records I want before doing anything else with the data.
If your file is multiple GBs worth of data, I would avoid storing it all in a Dict in the first place. I think it is better to process the file sequentially and store the keys that meet your condition in a PriorityQueue from the DataStructures.jl package. Of course you can apply the same procedure if you read the data from a dictionary in memory (the source simply changes from a disk file to the dictionary).
Here is a sketch of what you could consider, written as a function over a generic source of key-value pairs (a full solution would depend on how you read your data, which you did not specify).
Assume that you want to keep collecting elements until their total exceeds a threshold passed in the THRESH argument.
using DataStructures

function select_records(source, THRESH)
    pq = PriorityQueue{String, Int}()   # the pair with the smallest value is always at the front
    s = 0
    for (key, value) in source          # source yields (key, value) pairs
        # this check avoids adding a key-value pair for which we are sure that
        # it is not interesting
        if s <= THRESH || value > peek(pq)[2]
            enqueue!(pq, key, value)
            s += value
            # if we added something to the queue we have to check
            # whether we should drop the smallest elements from it
            while s - peek(pq)[2] > THRESH
                s -= dequeue_pair!(pq)[2]
            end
        end
    end
    return pq
end
After this process pq will hold only the key-value pairs you are interested in. The key benefit of this approach is that you never need to hold the whole data set in RAM; at any point in time you only store the key-value pairs that would be selected at this stage of processing the data.
Observe that this process does not give you an easily predictable result, because several keys might have the same value, and if that value falls on the cutoff border you do not know which ones will be retained (however, you did not specify what you want to do in this special case; if you specify the requirement for it, the algorithm would only need a small update).
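A short usage sketch of the select_records function above, feeding it the small example Dict from the question instead of a FASTQ file (any iterable of key-value pairs works as the source; the threshold of 15 is picked arbitrarily for the example):

testdict = Dict("ee0a" => 5, "002e4" => 8, "12-f9" => 13)
pq = select_records(testdict, 15)
collect(keys(pq))   # ["002e4", "12-f9"] in some order - the identifiers to keep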
If you have enough memory to hold at least one or two full Dicts of the required size, you can use an inverted Dict, with the length as the key and an array of the old keys as the value, so that you don't lose data when two records share the same length.
I think that the code below is then what your question was leading toward:
d1 = Dict("a" => 1, "b" => 2, "c" => 3, "d" => 2, "e" => 1, "f" =>5)
d2 = Dict()
for (k, v) in d1
d2[v] = haskey(d2, v) ? push!(d2[v], k) : [k]
end
println(d1)
println(d2)
for k in sort(collect(keys(d2)))
print("$k, $(d2[k]); ")
# here can delete keys under a threshold to speed further processing
end
If you don't have enough memory to hold an entire Dict, you may benefit from first putting the data into a SQL database like SQLite and then doing queries instead of modifying a Dict in memory. In that case, one column of the table will be the data, and you would add a column for the data length to the SQLite table. Or you can use a PriorityQueue as in the answer above.
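If it helps, here is a rough sketch of that SQLite route using the SQLite.jl and DBInterface.jl packages (the file name records.db and the table layout are made up for the example; exact calls may vary slightly between package versions):

using SQLite, DBInterface

db = SQLite.DB("records.db")
DBInterface.execute(db, "CREATE TABLE IF NOT EXISTS records (id TEXT, seq TEXT, len INTEGER)")

# while reading the FASTQ file, insert each record together with its length
DBInterface.execute(db, "INSERT INTO records VALUES (?, ?, ?)", ("ee0a", "aatcg", 5))

# later, query by length instead of sorting a Dict in memory
for row in DBInterface.execute(db, "SELECT id, len FROM records ORDER BY len DESC")
    println(row.id, " ", row.len)
end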
I'm making a spreadsheet to help me with my personal accounting. I'm trying to create a formula in LibreOffice Calc that will search in a given cell for a number of different text strings and if found return a text string.
For example, the formula should search for "burger" or "McDonalds" in $C6 and then return "Food" to $E6. It should not be case sensitive, and it needs to match partial strings as well, as in the case of Burger King. I need it to be able to search for other keywords and return other values as well, like "AutoZone" returning "Auto" and "NewEgg" returning "Electronics".
I've had a tough time finding any kind of solution to this, and the closest I could get was with a MATCH formula, but once I nested it in an IF it would not work. I've also tried nested IF with OR; no joy with either.
Examples:
=IF(OR(D10="*hulu*",D10="*netflix*",D10="*movie*",D10="*theature*",D10="*stadium*",D10="*google*music*")=1,"Entertainment",IF(OR(D10="*taco*",D10="*burger*",D10="*mcdonald*",D10="*dq*",D10="*tokyo*",D10="*wendy*",D10="*cafe*",D10="*wing*",D10="*tropical*",D10="*kfc*",D10="*olive*",D10="*caesar*",D10="*costa*vida*",D10="*Carl*",D10="*in*n*out*",D10="*golden*corral*",D10="*nija*",D10="*arby*",D10="*Domino*",D10="*Subway*",D10="*Iggy*",D10="*Pizza*Hut*",D10="*Rumbi*",D10="*Custard*",D10="*Jimmy*")=1,"Food",IF(OR(D10="*autozone*",D10="*Napa*",D10="*OREILLY*")=1,"AUTO","-")))
I can create a different table and make a lookup reference, so another way to put this is: I need something that does the opposite of what VLOOKUP and HLOOKUP do and returns the header value for any data matching in the given columns.
Something like:
=IF(NOT(ISNA(MATCH(A1,B3:B99))),B2,IF(NOT(ISNA(MATCH(A1,C3:C99))),c2,0))
Here A1 would be the test value, B2 and C2 the headers, and the search would run in the columns below them.
As per my comments, try this:
=IF(SUM(LEN(G150)-LEN(SUBSTITUTE(LOWER(G150),{"hulu","netflix","movie","theater"," stadium"},"")))>0,"Entertainment",IF(SUM(LEN(G150)-LEN(SUBSTITUTE(LOWER(G150),{"burger","taco","vida","cafe","wing","dairy","mcdonald","wendy","kfc","pizza","carl","domino","ceaser","olive","jimmy","custard","subway","arby"},"")))>0,"Food",IF(SUM(LEN(G150)-LEN(SUBSTITUTE(LOWER(G150),{"autozone","Napa","oreilly"},"")))>0,"AUTO","-")))
It is an Array formula and must be confirmed with Ctrl-Shift-Enter.
You can do this various ways using INDEX/MATCH/VLOOKUP formulae. Just a couple of caveats: I am using Excel, and never used Libre so hope this works; and, you will need a mapping table that maps MacDonalds to Food, Google Music to Entertainment and so on (for all the cases possible).
Let's assume your mapping table in your screenshot is A6 to E9.
The formula in E10 is =VLOOKUP(C10,$C$6:$E$9,3,0)
Explanation: it looks up C10 (Burger King) in the table $C$6:$E$9, and the result comes from the 3rd column of that table (E is the 3rd column from C, where C10 was looked up). The 0 gives you an exact match; if you want an approximate match, enter 1 there.
Note: if your mapping table is in, say, columns G and H (service name in G and type of service in H), AND you are unsure how many entries it will have, a modified formula is =VLOOKUP(C10,$G:$H,2,0) or =VLOOKUP(C10,$G:$H,2,1) for an approximate match. Here, 3 is replaced by 2 because H is the 2nd column from G, where C10 will be looked up.
EDIT: Doing VLOOKUP with the INDEX and MATCH functions for an approximate match of text - this could be the solution you are looking for in your last comment(?)
Two things need to be done: a. the reference table entries, and b. applying the INDEX/MATCH function.
Part a - in your reference table, you will have to wrap each value to be looked up between two *s, the way you mention in your example in the question: *movie*, *wendy*, etc. That is really the trick that enables a lookup by cell reference. The corresponding return values like Entertainment/Food/etc. need to be their own full words. Let's assume you have this table prepared in columns G6:H26 (G - lookup value, H - return value).
Part b - in your cell F6 (as per your screenshot), you can try this formula: =INDEX($H$6:$H$26,MATCH(C6,$G$6:$G$26,0))
That is really just the replacement formula for VLOOKUP using INDEX/MATCH.
As the values stored in column G are wrapped in *s, the cell C6 in the MATCH formula will do a partial match.
I have a problem whereby I have several discrete lists of IDs, e.g.
List (A) 1,2,3,4,5,7,8
List (B) 2,3,4,5
List (C) 4,2,8,9,1
etc...
I then have another collection of IDs...
For example: 1,2,4
I need to try to match one into each list. If I can perfectly match all IDs in my secondary collection (one collection ID matched with an ID from each list) then I get a true result.
I have found that it becomes complicated because if you simply iterate over the lists, matching the first collection/list pair you encounter, you may preclude a possible combination further down the line and hence return a false negative.
For example:
List (A) 1,2,3,4
List (B) 1,2,3,4
List (C) 3,4
Collection is: 3,1,2
The first ID from the collection (3) matches an entry in list A, and the second ID in the collection (1) matches an item in list B; however, the final ID in the collection (2) DOESN'T match any entry in list C. Yet if you rearrange the order of the collection to 2,1,3, a match is found. Therefore I am looking for some form of logic for attempting a match across all possible combinations in an efficient manner.
To make it more complicated, the IDs are actually GUIDs, so they can't just be sorted in ascending order.
I hope I have described this well enough to make it clear what I am attempting, and with a bit of luck somebody will be able to tell me that what I need to do is very easy and I am missing something really simple!
I am forced to code this in VB6, but any methods or pseudocode would be great. The backend of this is SQL Server, so if a solution using T-SQL were possible that would be even better, as all of the IDs are held in tables already.
Many thanks in advance.
Jake, yep, the lists and the collection both contain GUIDs. I used plain integers to simplify the problem a bit.
Once a list has been matched it can't be searched again, hence the ordering problem that I tried to explain. If you mark a list as 'matched' then no further attempts to match against it will be made. It is this very behaviour that can cause a false negative.
'Sending' the collection in every possible order would work, but would be a massive job...
I feel I must be missing a really straightforward concept or solution here??!!
Thanks for your assistance so far.
I don't see a way around checking each GUID contained in the lists against each GUID in the collection. You would have to keep a record of which lists each GUID in the collection occurs in.
To use your example of the collection (3, 1, 2): 3 occurs in lists A, B and C.
You will basically be left with this dataset.
3 (A, B, C)
1 (A, B)
2 (A, B)
Once you have distilled it down to this dataset, you can determine whether there are any GUIDs with zero occurrences in the lists, which would result in a negative.
I am not at all well versed in algorithms, but this is how I would proceed after that:
Start with the first set (A, B, C), and check how many times it occurs further on in the dataset. In this case no occurrences are found.
Moving on to the next set (A, B): if the number of occurrences of this set were greater than the length of the set, i.e. more than two occurrences, that would result in a negative. If the number of occurrences matches the length exactly, as is the case here, the set (A, B) can be removed from any further consideration, leaving:
3 (C)
1 ()
2 ()
I guess you would continue to repeat the process until a negative is identified or all the occurrences have been excluded. There is probably a recognized algorithm for this sort of problem, but my knowledge is a bit lacking in that respect. :(
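This is essentially a bipartite matching problem between collection entries and lists. A brute-force backtracking sketch (written in Julia purely to illustrate the logic; the real implementation would be VB6 or T-SQL, and the integer IDs stand in for GUIDs as in the question) could look like this:

# collection: IDs to place; lists: vector of sets of IDs; used: which lists are already taken
function can_match(collection, lists, used = falses(length(lists)))
    isempty(collection) && return true
    id, rest = collection[1], collection[2:end]
    for (i, lst) in enumerate(lists)
        if !used[i] && id in lst
            used[i] = true
            can_match(rest, lists, used) && return true
            used[i] = false          # backtrack and try the next list
        end
    end
    return false
end

lists = [Set([1, 2, 3, 4]), Set([1, 2, 3, 4]), Set([3, 4])]
can_match([3, 1, 2], lists)   # true - the order of the collection no longer matters

For large inputs a proper bipartite matching algorithm (e.g. Hopcroft-Karp) would avoid the worst-case exponential behaviour of plain backtracking, but for a handful of lists a sketch like the above should be enough.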