So I have a dictionary of dictionaries, and I want to create a new dictionary containing the exact same data but with the order of nesting swapped. (So if I have 'years --> countries --> data about that country in that year', the new dictionary is structured 'countries --> years --> data about that country in that year'.) Despite studying this for a while (and many, many hours of trying) I still can't seem to get anywhere. I think part of the problem is I don't know where to start! Any help would be appreciated! Thanks in advance. (Doing this in Python, btw, so help in that language would be especially appreciated!)
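Since no code was posted, here is a minimal sketch of the usual approach, assuming a structure like data[year][country] (the variable names and sample data are made up for illustration):

```python
# Invert a dict of dicts: data[year][country] -> swapped[country][year].
data = {
    2010: {"France": {"gdp": 1}, "Spain": {"gdp": 2}},
    2011: {"France": {"gdp": 3}},
}

swapped = {}
for year, countries in data.items():
    for country, record in countries.items():
        # Create the inner dict for this country on first sight, then
        # file the same record under the year.
        swapped.setdefault(country, {})[year] = record

print(swapped)
# {'France': {2010: {'gdp': 1}, 2011: {'gdp': 3}}, 'Spain': {2010: {'gdp': 2}}}
```

The records themselves are shared, not copied, so the same data appears in both structures without duplication.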
I have a beginner question about writing an SQLite query that I can't seem to figure out:
You have a table called "D" with one column called "PersonName", one called "DrinkName", and one called "CountryName", and you want to see which PersonName has had every DrinkName where CountryName = 'France', for example.
I have tried using HAVING COUNT in numerous different ways, but it never works out, and I didn't really know where to turn, which is why I am asking here! Sorry if this question isn't relevant to Stack Overflow or is hard to understand; it's my first time here :)
Thanks in advance!
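For reference, the usual HAVING COUNT pattern for this kind of "has had all X" question (relational division) looks something like the sketch below. Table and column names follow the question; the sample data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE D (PersonName TEXT, DrinkName TEXT, CountryName TEXT);
INSERT INTO D VALUES
    ('Alice', 'Wine',   'France'),
    ('Alice', 'Cognac', 'France'),
    ('Bob',   'Wine',   'France');
""")

# A person qualifies if the number of distinct French drinks they have had
# equals the total number of distinct drinks from France.
query = """
SELECT PersonName
FROM D
WHERE CountryName = 'France'
GROUP BY PersonName
HAVING COUNT(DISTINCT DrinkName) =
       (SELECT COUNT(DISTINCT DrinkName) FROM D WHERE CountryName = 'France');
"""
print(conn.execute(query).fetchall())  # [('Alice',)]
```

The DISTINCT matters if the same person can have the same drink more than once; otherwise a plain COUNT would overcount.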
I would really like to extract, all at once, the data that normally appears when hovering over each column. The graph is interactive, so it's hard to get all the data out in one go.
I suggest that you pick a programming language that you know fairly well.
Then load the web pages, use a selector to select the desired elements, and output the data in the format you like.
Please begin writing the code, and update your question once you have something at least partially working, so you can ask precisely where you need help.
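In Python, for instance, a minimal sketch with requests and BeautifulSoup might look like the following. The URL and CSS selector are placeholders; note that an interactive chart often loads its data through a separate JSON request, which can be easier to fetch directly than scraping the rendered page:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL: replace with the actual page holding the chart.
resp = requests.get("https://example.com/chart-page")
soup = BeautifulSoup(resp.text, "html.parser")

# Placeholder selector: adjust to the elements that hold the hover data.
for node in soup.select("div.tooltip-data"):
    print(node.get_text(strip=True))
```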
Wondering what would be a good way to manage common reference data used by all nodes (or specified nodes) in Corda? One example would be a Contract Type or a Legal Entity Name: common reference data shared by specified nodes.
I was thinking an oracle could be the solution, but after studying it, an oracle does not seem appropriate, as we only need to fetch the list of reference data, and such lookups can be quite frequent.
Another solution I have in mind is a centralized place to manage such data, which could be accessed through an API. I'd appreciate it if anyone could help on this. Thanks.
Kwan
Reference data should generally be included in the form of attachments.
The theory is here, and you can read about how they work here.
I've scoured the net and cannot seem to find an appropriate example, so I thought I'd ask...
(Btw, much of this is new to me: not all, just most.)
Problem: I'm trying to convert a Biopython nested dictionary (or XML) of PubMed citation data into a flat (normalized) structure, e.g., SQLite. The citation data was fetched from PubMed using Biopython and parsed into a dictionary, but it can also be retrieved as XML if needed.
Not all citations will have all fields/keys, and not all fields/keys will have the same number of items (authors, MeSH terms, refs, etc.), and I understand that handling this is part of the normalization process.
This is about where my practical understanding ends.
That said, I think the process should go something like this: first remove/normalize all unique fields (those that occur once per paper, e.g., title, abstract, date, citation, etc., but not, say, affiliation, as that would be linked to the first author). Papers with no abstract could have that field filled in as NULL?
Then move on to, say, authors and create a separate table, again using PMID as the foreign key, and then do the same for the various other fields/keys/items in separate tables, e.g., MeSH headings, EC numbers, refs, etc.
Is there a way to do this that removes (pops?) keys/items from the master dictionary so that I can visually see what's been done/needs to be done (obviously leaving the PMID)?
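A minimal sketch of that flow, assuming a made-up citation dict keyed by PMID (the table and field names are illustrative only, and real Biopython output has many more keys): the one-per-paper fields go into a papers table, repeated items like authors go into their own table keyed by PMID, and each key is pop()ed as it is handled, so whatever remains in the dict is what still needs a table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE papers  (pmid TEXT PRIMARY KEY, title TEXT, abstract TEXT);
CREATE TABLE authors (pmid TEXT, position INTEGER, name TEXT);
""")

# A made-up citation record standing in for parsed Biopython output.
citation = {
    "PMID": "12345",
    "Title": "Example paper",
    "Abstract": None,          # missing abstract stored as NULL
    "Authors": ["Smith J", "Doe A"],
    "MeshTerms": ["term1", "term2"],
}

pmid = citation["PMID"]  # keep the PMID; pop everything else as it's handled

# One-per-paper fields: pop them into the papers table (None becomes NULL).
conn.execute("INSERT INTO papers VALUES (?, ?, ?)",
             (pmid, citation.pop("Title"), citation.pop("Abstract")))

# Repeated fields: one row per item, keyed by PMID.
for i, name in enumerate(citation.pop("Authors")):
    conn.execute("INSERT INTO authors VALUES (?, ?, ?)", (pmid, i, name))

print(citation)  # what's left, e.g. MeshTerms, still needs its own table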
Again, apologies in advance if I'm asking a blindingly obvious question to the initiated (and I do understand that you can't fit a nested structure into a flat space); I'm just looking for the least boneheaded way of going about this, and hopefully one that will allow me to make sure that everything was properly captured.
Many thanks,
chris
A quick question: if you already have the data in XML, why are you normalizing it into a SQL format? Why not just use the raw XML? Berkeley DB XML is a library (like SQLite) that links into your application; there is no separate server to install or maintain. The library allows you to store and query XML data using XPath or XQuery. It's very fast, has a small footprint, is transactional, recoverable, and highly reliable. It has HA features as well, if that is required.
Keeping the data in XML should simplify the whole data import process and still allow you to query the semi-structured data.
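To give a flavour of querying the raw XML, here is a sketch using Python's standard-library ElementTree (which supports a limited XPath subset) rather than Berkeley DB XML itself; the element names are made up to loosely resemble a PubMed record:

```python
import xml.etree.ElementTree as ET

# Made-up XML loosely resembling a PubMed citation; real records differ.
xml_doc = """
<PubmedArticle>
  <PMID>12345</PMID>
  <AuthorList>
    <Author><LastName>Smith</LastName></Author>
    <Author><LastName>Doe</LastName></Author>
  </AuthorList>
</PubmedArticle>
"""

root = ET.fromstring(xml_doc)
# XPath-style query: all author last names anywhere in the record.
for last in root.findall(".//Author/LastName"):
    print(last.text)
```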
What it says on the tin: I have an XMLList, and I want to find where in it a particular XML item falls. First index is good enough for my purposes.
Note that I have no problem writing a function to do this by hand... but I was hoping that the API has something buried somewhere that'll do it for me. I didn't see it, though.
Just loop through the XMLList until you find an item that matches, then return the index from your loop (a for loop with an index).
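The question is about ActionScript, so this is just a Python analog of that pattern: enumerate the nodes and return the first index that matches (the id attribute used for matching is made up for illustration).

```python
import xml.etree.ElementTree as ET

items = ET.fromstring("<list><a id='1'/><a id='2'/><a id='3'/></list>")

def first_index(nodes, target_id):
    # Loop with an index; return the first position that matches.
    for i, node in enumerate(nodes):
        if node.get("id") == target_id:
            return i
    return -1  # not found

print(first_index(list(items), "2"))  # 1
```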
Without a code example it's a bit difficult to know what you're trying to do exactly, but maybe XML.childIndex() could be the solution?