How to create a DICOM directory faster?

I am creating DICOM directories with the Leadtools SDK. Everything is OK when the study is not very large, but when the study gets larger (e.g. more than 4 or 5 GB) it takes a lot of time to create the DICOM directory.
Is there any way to make the process faster?

In previous versions of LEADTOOLS, the time needed to create a DICOM directory grew rapidly with the number of images.
In the latest version, the process has been greatly optimized and is significantly faster, especially when the number of images is very large.
If your test data sets consist of a large number of images, we recommend downloading the latest version of the toolkit and then contacting support@leadtools.com to obtain the latest optimized DLLs.
If the problem appears even when using a small number of images and with the latest DLLs, send full details to our support email address and we will investigate it further.

What is the file size limit for the .r file extension?

What is the maximum file size limit for a .r extension file now?
I read that it has a 5 MB limit; is that still the case? If that changes, will it be different from OS to OS or from R version to R version?
Reference: RStudio maximum file size reached
I'm very new to R, can someone please help me?
Thanks
There is no documented limit for the maximum file size of R code files. In fact, R will be able to deal with anything that's even remotely reasonable. But for what it's worth, a 5 MiB source code file is not reasonable. If you actually have such files, I strongly suggest removing the large data that's declared inside them and moving it to a proper data file instead: separate your code and data. Actual code will never be this big.
As for editing such a file, different code editors have different limits for the size of files they deal well with. Again, having such a big code file is plain unreasonable, so not many code editors bother catering to this use-case, and even though few editors have a hard-coded limit, interactively editing such large files may not work.

How do I speed up loading many small RDF files into Sesame?

I'm working with an RDF dataset generated as part of our data collection which consists of around 1.6M small files totalling 6.5G of text (ntriples) and around 20M triples. My problem relates to the time it's taking to load this data into a Sesame triple store running under Tomcat.
I'm currently loading it from a Python script via the HTTP API (on the same machine) using simple POST requests, one file at a time, and it's taking around five days to complete the load. Looking at the published benchmarks, this seems very slow and I'm wondering what method I might use to load the data more quickly.
I did think that I could write Java to connect directly to the store and so do without the HTTP overhead. However, I read in an answer to another question here that concurrent access is not supported, so that doesn't look like an option.
If I were to write Java code to connect to the HTTP repository does the Sesame library do some special magic that would make the data load faster?
Would grouping the files into larger chunks help? This would cut down the HTTP overhead for sending the files. What size of chunk would be good? This blog post suggests 100,000 lines per chunk (it's cutting a larger file up, but the idea would be the same).
Thanks,
Steve
If you are able to work in Java instead of Python I would recommend using the transactional support of Sesame's Repository API to your advantage - start a transaction, add several files, then commit; rinse & repeat until you've sent all files.
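A minimal sketch of what I mean is below; the server URL, repository ID, data directory and batch size are just placeholders to adjust for your installation, and on older Sesame releases you would use setAutoCommit(false) instead of begin():

    import java.io.File;

    import org.openrdf.repository.Repository;
    import org.openrdf.repository.RepositoryConnection;
    import org.openrdf.repository.http.HTTPRepository;
    import org.openrdf.rio.RDFFormat;

    public class BatchLoader {
        public static void main(String[] args) throws Exception {
            // Placeholder server URL and repository ID - adjust to your installation.
            Repository repo = new HTTPRepository("http://localhost:8080/openrdf-sesame", "myrepo");
            repo.initialize();

            File[] files = new File("/data/ntriples").listFiles(); // placeholder directory
            int batchSize = 1000; // files per commit; tune this for your setup

            RepositoryConnection con = repo.getConnection();
            try {
                con.begin(); // older Sesame releases: con.setAutoCommit(false)
                for (int i = 0; i < files.length; i++) {
                    con.add(files[i], null, RDFFormat.NTRIPLES);
                    if ((i + 1) % batchSize == 0) {
                        con.commit(); // flush this batch to the store
                        con.begin();  // start the next batch
                    }
                }
                con.commit(); // commit the final (partial) batch
            } finally {
                con.close();
                repo.shutDown();
            }
        }
    }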
If that is not an option then indeed chunking the data into larger files (or larger POST request bodies - you of course do not necessarily need to physically modify your files) would help. A good chunk size would probably be around 500,000 triples in your case - it's a bit of a guess to be honest, but I think that will give you good results.
You can also cut down on overhead by using gzip compression on the POST request body (if you don't do so already).
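If you stay on the plain HTTP route, a rough sketch of posting one pre-built chunk with a gzip-compressed body could look like the following. The repository URL and chunk file name are made up, N-Triples goes over the wire as text/plain, and whether your server accepts gzip-encoded request bodies depends on your setup:

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.zip.GZIPOutputStream;

    public class GzipChunkPost {
        public static void main(String[] args) throws Exception {
            // Placeholder repository URL and chunk file name - adjust to your installation.
            URL url = new URL("http://localhost:8080/openrdf-sesame/repositories/myrepo/statements");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/plain");  // N-Triples
            conn.setRequestProperty("Content-Encoding", "gzip");    // compressed request body
            conn.setChunkedStreamingMode(0); // stream instead of buffering the whole body in memory

            try (OutputStream out = new GZIPOutputStream(conn.getOutputStream());
                 InputStream in = new FileInputStream("chunk-000.nt")) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n); // compress and send as we read
                }
            }

            System.out.println("Server responded with HTTP " + conn.getResponseCode());
        }
    }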

Why version counters and not timestamps?

Why does Flyway use version numbers rather than timestamps?
How is that supposed to work with larger and possibly distributed teams?
Do I have to send an e-mail to all team members announcing that I am now reserving version number xy for myself?
What happens if two developers both use the same version number?
What if a lower version number is checked into version control (and executed by the build server on the integration database) after another higher number has already been checked in?
I am used to mybatis-migrations, which is closely modeled after the migrations in Rails (>= 2.1), where timestamps are used instead of version numbers.
Right now I think timestamps make a lot more sense: I don't have to worry about version numbers, and out-of-order migrations are easily detected.
Quite a few questions here. I'll do my best to answer them.
Flyway's versioning system is flexible. It doesn't care whether your version is called 1.0, 20120816115123 or 2012.8.16.11.51.23. You are therefore free to use timestamps if you wish.
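For example, with the default naming convention (prefix V, separator __, suffix .sql) both of the following are valid migration file names; the descriptions here are invented:

    V1_0__create_person_table.sql
    V20120816115123__add_email_column.sql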
Reserving a version number can be as simple as adding your name next to a number on a whiteboard, a sheet of paper or a wiki page.
Flyway will detect multiple migrations with the same version and report an error.
Out of order migration support is currently the #1 requested issue and will be included in the upcoming 1.8 release.

Storing and downloading Data in iOS Applications

I am a bit new to iOS Development and I was wondering if someone could point me in the right direction regarding an application I am working on.
I am currently working on an application that will be displaying product lists and categories. The list is updated on a weekly basis (once every week).
I am now trying to decide two things:
1- What's the best method of storing this data? I am looking for a way that will allow me to replace the data in the application once every week.
2- Is it going to be beneficial to use Core Data? Note that I only have Product Category, Product and Product Information entities.
Appreciate your support.
I would use Core Data, because I know Core Data and am used to working with it. But this is clearly very much like using a chainsaw to cut a slice of bread.
As I understand, you're not familiar with Core Data. Maybe it's not the right tool for the job considering the learning curve.
In your case I would simply use JSON files as provided by the server.
That said, if you're looking into Core Data anyway, any store will do: atomic, XML or SQLite. The first two will load the whole data set into memory, and queries will be done in memory as well. SQLite provides the benefits usually associated with databases, with slightly increased complexity. A chainsaw.
I would use Core Data. If you haven't worked with Core Data before, learn it. It's a great framework.

ASP.NET out of memory help

My first question here :)
I have a report-generating website. When the user clicks a button, the report is generated in a different sub as an HTML file and is written to a .txt file. The HTML file is later converted to a PDF in a different sub.
When the report is long (200 pages), I get an out-of-memory exception when the PDF is generated. The memory seems to be allocated by the HTML generation, since when I convert the HTML to PDF in a different web form it works perfectly.
I have tried to use an analysis program like ANTS, but I don't have the knowledge to sort it out.
How can I release the memory used by the HTML generation?
Thanks!
/Georg
The memory used by a well-written component should hopefully get cleared out. However, since this is a fairly large document, the component may be fine by design and still max out the memory. You can:
1. Try to increase the memory in IIS available to your worker process
2. http://support.microsoft.com/kb/911716
3. (You didn't specify the server version, so this depends on that) http://support.microsoft.com/kb/820108
As for ANTS, there are tutorials on Red Gate's site discussing its usage. If it's a third-party component, there may not be much you can do except increase the available memory or contact the vendor.
