I'm new to the DICOM area. I have to create a small tool that converts a DICOM file to a simple PDF report. This report should contain just patient info, some measurements, and picture(s). In the documentation I found that all information in a DICOM file is stored as tags, like tag1-value1, tag2-value2. Using an external library, I found a way to pull the patient info and pixel data. But I am stuck with the measurements: I didn't find the tags I need, or maybe the data is stored in a different way in the DICOM file.
So my questions are:
Does the DICOM file that comes from an ultrasound machine contain OB-GYN measurements like HC (head circumference), AC (abdominal circumference), BPD (biparietal diameter) and others?
In what tag/section is this information contained?
Thanks for any help or useful links to read.
If there is a DICOM Structured Reporting (SR) object coming from the ultrasound machine, the measurements will probably be stored according to the SR Template TID 5000. You should have a look at the DICOM Conformance Statement of the machine to check this out.
You can make the content of such an SR document "human-readable" by a tool like dsrdump.
The files contain the information if the sonographer entered the measurements.
Look at the documentation about Structured Reporting to see how the data is accessed (e.g. http://www.dclunie.com/pixelmed/DICOMSR.book.pdf).
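For programmatic access, here is a minimal sketch with pydicom that walks the SR content tree and prints the numeric measurement items. It assumes the file really is an SR object; the file name "report.dcm" is hypothetical.

# Minimal sketch: walk a DICOM SR content tree and print numeric measurements.
# Assumes pydicom is installed; "report.dcm" is a hypothetical file name.
import pydicom

def walk(items, depth=0):
    for item in items:
        # NUM items carry a coded concept name plus a measured value with units
        if item.get("ValueType") == "NUM":
            name = item.ConceptNameCodeSequence[0].CodeMeaning
            mv = item.MeasuredValueSequence[0]
            units = mv.MeasurementUnitsCodeSequence[0].CodeValue
            print("  " * depth + f"{name}: {mv.NumericValue} {units}")
        # CONTAINER items nest further content (e.g. measurement groups in TID 5000)
        if "ContentSequence" in item:
            walk(item.ContentSequence, depth + 1)

ds = pydicom.dcmread("report.dcm")
walk(ds.get("ContentSequence", []))

For HC, AC and BPD you would match the CodeMeaning (or, more robustly, the CodeValue) of the concept name against the codes listed in TID 5000.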
Related
I'm trying to build a stock analysis spreadsheet in Google Sheets, using the importXML function with (absolute) XPath expressions and the importHTML function with tables, to scrape financial data from the www.morningstar.co.uk key ratios page for the companies I like to keep an eye on.
Example: https://tools.morningstar.co.uk/uk/stockreport/default.aspx?tab=10&vw=kr&SecurityToken=0P00007O1V%5D3%5D0%5DE0WWE%24%24ALL&Id=0P00007O1V&ClientFund=0&CurrencyId=BAS
=importxml(N9,"/html/body/div[2]/div[2]/form/div[4]/div/div[1]/div/div[3]/div[2]/div[2]/div/div[2]/table/tbody/tr/td[3]")
=INDEX(IMPORTHTML(N9,"table",12),3,2)
N9 being the cell containing the URL to the data source
I'm mainly using Morningstar as my data source due to the overwhelming amount of free information, but the links keep breaking: either the URL has slightly changed or the XPath hierarchy has been altered.
I'm guessing, from what I've read so far, that busy websites such as these are dynamic and change often, which is why my static links are breaking.
Is anyone able to suggest a solution, or confirm whether CSS selectors would be a more stable/reliable method of retrieving the data?
Many thanks in advance
I've tried both short and long XPath expressions (copied from the dev tools in Chrome) and frequently changed the URL to repair the link to the data source, but it keeps breaking shortly afterwards and I'm unable to retrieve any information.
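One thing worth noting: an absolute XPath like the one above breaks whenever any ancestor element changes, whereas a relative XPath anchored on a stable attribute of the target table usually survives layout changes. A rough Python/lxml illustration of the difference (the class name "key-ratios" is invented for the example, not taken from the Morningstar page):

# Sketch: brittle absolute XPath vs. a more robust relative one.
# Requires requests and lxml; the class name "key-ratios" is hypothetical.
import requests
from lxml import html

resp = requests.get("https://tools.morningstar.co.uk/uk/stockreport/default.aspx")
tree = html.fromstring(resp.content)

# Brittle: any added or removed ancestor div shifts the whole path.
cells = tree.xpath("/html/body/div[2]/div[2]/form/div[4]//table//tr/td[3]")

# More robust: anchor on an attribute of the table itself.
cells = tree.xpath('//table[contains(@class, "key-ratios")]//td[3]')

The same idea carries over to Sheets: =IMPORTXML(N9, "//table[contains(@class, 'key-ratios')]//td[3]") depends only on the table's own markup, not on everything above it.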
The map linked below contains a number of layers which I would like to extract as polygons, if possible. I've not previously done any web scraping and realise that doing so with the geographic data in this system represents a significant challenge.
Ideally I would only want to extract the data relating to the 'Shopping Local Centre' category.
Happy to try to use Python or R to achieve this, just wondered if anyone had any ideas.
Web scraping (using BeautifulSoup, for example) would get you the HTML objects from a webpage. You would need basic knowledge of Python for this.
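For instance, a minimal BeautifulSoup sketch (the URL and CSS selector are placeholders, not taken from the actual site):

# Sketch: fetch a page and extract elements with BeautifulSoup.
# Requires requests and beautifulsoup4; URL and selector are placeholders.
import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("https://example.com/map").text, "html.parser")
for el in soup.select("div.layer-name"):  # hypothetical CSS class
    print(el.get_text(strip=True))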
Or you could avoid that by going this route:
With QGIS and Geofabrik you can gather retail location polygons and their attributes of a given area.
Use Geofabrik to download your area of interest in *.shp (shapefile) format. It looks like you're in Greater Manchester, so I navigated to the download page here (it's a 50MB file for the greater-manchester-latest-free.shp.zip).
Once you download that, open it in QGIS and you'll see in the attributes it has retail locations.
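If you'd rather do the filtering in Python than in QGIS, the same download can be processed with geopandas. A sketch, assuming the usual Geofabrik OSM shapefile layout (a gis_osm_landuse_a_free_1.shp layer with an "fclass" column; verify against your actual download):

# Sketch: filter retail polygons from a Geofabrik OSM shapefile extract.
# Assumes the usual Geofabrik layer/column names; verify against your download.
import geopandas as gpd

landuse = gpd.read_file("gis_osm_landuse_a_free_1.shp")
retail = landuse[landuse["fclass"] == "retail"]
retail.to_file("retail_polygons.shp")
print(f"extracted {len(retail)} retail polygons")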
That site is using WMS to display the map (I work for the company that makes iShare) so there is no vector content for you to scrape as it works entirely with images.
The easiest way to get the data would be to ask the council to provide it, you might need to make it a freedom of information request but they should be happy to provide the data in a usable GIS format.
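If you want to verify for yourself that the service is image-only, you can inspect its capabilities with OWSLib; a sketch (the service URL is a placeholder):

# Sketch: inspect a WMS endpoint with OWSLib. The URL is a placeholder.
from owslib.wms import WebMapService

wms = WebMapService("https://example.council.gov.uk/wms")  # hypothetical endpoint
for name, layer in wms.contents.items():
    print(name, layer.title)
# GetMap returns rendered images (e.g. PNG), not the underlying vectors:
print(wms.getOperationByName("GetMap").formatOptions)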
I have been asked by a client to see if Bing Custom Search can order results containing links to PDFs by file modification date.
I know results can be ordered by the date the content was indexed (or re-indexed), but they are concerned with the actual age of PDF files as determined by the filesystem timestamp, and want to order the results by that criteria.
I could not find anything in the Azure documentation, and personally I don't believe it is possible, but I wanted to check in with SO first.
I don't believe this specific scenario is currently supported. But if this is a feature you would like to see supported in the future, you may leave your feedback on UserVoice.
Hi, I am using Papaya to view DICOM images. I have a segmented set of DICOM images whose segmented structures I can view using the software dicompyler (http://www.dicompyler.com/).
I can see the segmented structures in that software by clicking the structure names. Is this possible using Papaya? When I upload the DICOM image set, it says no pixel data found.
Or is it a problem due to the formatting of the segmented images itself?
Can someone help me?
I think the segmented data you are referring to is a DICOM RT Structure Set. This type of file stores the segmentation as a series of contours in patient coordinate space and does not contain any voxel data.
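You can confirm this with pydicom; a quick sketch (hypothetical file name) that lists each structure and its contour points. Note there is no PixelData element at all:

# Sketch: list the structures in a DICOM RT Structure Set with pydicom.
# Strictly, structures are matched via ReferencedROINumber; zip() assumes the
# usual case where the two sequences are in parallel order.
import pydicom

ds = pydicom.dcmread("rtstruct.dcm")  # hypothetical file name
for roi, contour in zip(ds.StructureSetROISequence, ds.ROIContourSequence):
    # ContourData is a flat list of x, y, z patient coordinates
    n_points = sum(len(c.ContourData) // 3 for c in contour.ContourSequence)
    print(f"{roi.ROIName}: {len(contour.ContourSequence)} contours, {n_points} points")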
According to the papaya forum, it isn't currently supported:
http://rii.uthscsa.edu/mango/forum/viewtopic.php?f=3&t=379&sid=2a8c8f942b6fbf3f15b537b5a69a5362
So, I'm going to be making an application, but for one of the features to work, I'll need to be able to look up definitions, antonyms, and synonyms for words.
The kicker here is that I'll need one that can be used for-profit, as I plan to make money from the application.
Any idea where I can find a dataset that matches my needs?
WordNet may be close to what you are looking for, and it can be used commercially.
http://wordnet.princeton.edu/
and the wiki article:
http://en.wikipedia.org/wiki/WordNet
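For example, a small sketch using WordNet through NLTK (assumes nltk is installed and the wordnet corpus has been downloaded):

# Sketch: look up definitions, synonyms and antonyms in WordNet via NLTK.
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

for syn in wn.synsets("good"):
    print(syn.name(), "-", syn.definition())
    synonyms = {lemma.name() for lemma in syn.lemmas()}
    antonyms = {ant.name() for lemma in syn.lemmas() for ant in lemma.antonyms()}
    print("  synonyms:", synonyms)
    if antonyms:
        print("  antonyms:", antonyms)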
You can use WordsAPI and buy its data set. According to the description on their website:
Purchase of the Words API data set entitles you to use the data as much as you want, for as long as you want...The only things you cannot do with the data are resell it, or use it in a service that competes directly with WordsAPI.
Also, you can download sample data (10% of the data set) for free.
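If you use the hosted API rather than the purchased data set, a call looks roughly like this sketch (WordsAPI is served through RapidAPI; the endpoint follows the pattern in their docs, but verify the exact URL and headers, and substitute your own key):

# Sketch: query WordsAPI for synonyms. Verify the endpoint and headers against
# the current WordsAPI documentation; the API key is a placeholder.
import requests

word = "example"
resp = requests.get(
    f"https://wordsapiv1.p.rapidapi.com/words/{word}/synonyms",
    headers={
        "X-RapidAPI-Key": "YOUR_API_KEY",  # placeholder
        "X-RapidAPI-Host": "wordsapiv1.p.rapidapi.com",
    },
)
print(resp.json())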