How to change the chunk size and retrieve a specific chunk in MongoDB GridFS?

I am pretty new to MongoDB. What I want to do is insert a 3 MB PDF file using the Java driver, change the chunk size from the default 256 KB to 1 MB, and then retrieve the second chunk, say the 2nd page of the PDF document.
How can I do this?
Thank you.

Generally, once a document has been written into GridFS you will need to re-write it (delete it and save it again) to modify the chunk size.
Since GridFS does not know anything about the format of the data in the file, it cannot help you get to the "2nd page". The InputStream implementation returned from GridFSDBFile does avoid reading blocks when you use the skip(long) method. If you know that the "2nd page" is N bytes into the file, then you can skip that many bytes in the stream and start reading.
HTH, Rob
P.S. Remember that skip(long) returns the number of bytes actually skipped. You should not assume that skip(12) always skips 12 bytes.
P.P.S. Starting to read from the middle of a PDF and making sense of what is there is going to be hard unless you have preserved state from the previous page(s).
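For what it's worth, here is a minimal sketch using the legacy Java driver's com.mongodb.gridfs classes. The database name, bucket name, file name, and the 1 MB offset for the "2nd page" are all placeholders, not something from the question: it stores the PDF with a 1 MB chunk size and then skips ahead in the returned stream, looping because skip() may skip fewer bytes than requested.

import com.mongodb.DB;
import com.mongodb.MongoClient;
import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSDBFile;
import com.mongodb.gridfs.GridFSInputFile;

import java.io.File;
import java.io.InputStream;

public class GridFsChunkExample {
    public static void main(String[] args) throws Exception {
        MongoClient mongo = new MongoClient("localhost", 27017);
        DB db = mongo.getDB("test");            // hypothetical database name
        GridFS gridFs = new GridFS(db, "pdfs"); // hypothetical bucket name

        // Store the PDF with a 1 MB chunk size instead of the default.
        GridFSInputFile in = gridFs.createFile(new File("document.pdf"));
        in.setChunkSize(1024 * 1024);
        in.save();

        // Re-read the file and skip ahead; skip() may skip fewer bytes than asked.
        GridFSDBFile stored = gridFs.findOne("document.pdf");
        InputStream stream = stored.getInputStream();
        long toSkip = 1024 * 1024;              // assume the "2nd page" starts here
        while (toSkip > 0) {
            long skipped = stream.skip(toSkip);
            if (skipped <= 0) break;
            toSkip -= skipped;
        }
        // ... read the page data from 'stream' here ...
        stream.close();
        mongo.close();
    }
}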

Related

What is the "buffer" in the Atom editor?

In describing how find (& find and replace) work, the Atom Flight Manual refers to the buffer as one scope of search, with the entire project being another scope. What is the buffer? It seems like it would be the current file, but I expect it is more than that.
From the Atom Flight Manual:
A buffer is the text content of a file in Atom. It's basically the same as a file for most descriptions, but it's the version Atom has in memory. For instance, you can change the text of a buffer and it isn't written to its associated file until you save it.
Also came across this from The Craft of Text Editing by Craig Finseth, Chapter 6:
A buffer is the basic unit of text being edited. It can be any size, from zero characters to the largest item that can be manipulated on the computer system. This limit on size is usually set by such factors as address space, amount of real and/or virtual memory, and mass storage capacity. A buffer can exist by itself, or it can be associated with at most one file. When associated with a file, the buffer is a copy of the contents of the file at a specific time. A file, on the other hand, can be associated with any number of buffers, each one being a copy of that file's contents at the same or at different times.

data deletion with concurrent reading/writing in file system

Most file systems use locking to handle concurrent reads and writes. But what if, after a read call, a write call is executed that deletes the data preceding the previous read?
Is the file offset pointer of a file open for reading updated to reflect the new start of the now-smaller file?
The question isn't really valid as stated, because you can't delete data using the write system call. You can overwrite data using the write(2) system call, but you can't delete data. Now, you can truncate the file using the truncate(2) system call. This changes the size of the file (reported via the st_size field of the stat(2) system call), and any bytes beyond the new end of the file (the changed st_size) will read back as zero if the file is later extended. You can increase the size of the file using the truncate system call by requesting a new size which is larger than the current size. It is undefined (per the POSIX specification) whether this is allowed, or what the system will do when it receives a truncate larger than the current size of the file. On many file systems it will simply set the size of the file to the requested size.
OK, a few more concepts. Associated with each open file structure is a file offset pointer. Attempts to read or write the file using the read(2) or write(2) system call advance the offset pointer by the number of bytes read or written. If you open a file twice using the open(2) system call, you will get two file descriptors, each referring to a different open file structure, and in that case a read(2) or write(2) using one file descriptor will not change the file offset for the other file descriptor. (If you clone a file descriptor using the dup(2) system call, you will get a second file descriptor which points to the same open file structure, and then changes made to the file offset via one file descriptor, using the read(2), write(2), or lseek(2) system calls, will be reflected via the cloned file descriptor. But that's a side issue, so that's all I will say on this topic for now.)
Now, if you truncate the file, this doesn't change the file offset in the open file structure. So the answer is that the file offset pointer won't be updated after the truncate, and an attempt to read beyond the truncated size of the file will simply return end-of-file (a zero-byte read).
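The answer above is framed in terms of the POSIX calls, but the same behaviour is easy to observe from Java's FileChannel, which wraps them. A small sketch (the file name is arbitrary): two independent opens keep independent offsets, and a read positioned past the truncated size simply returns end-of-file.

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class TruncateDemo {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("demo.dat");
        Files.write(path, "0123456789".getBytes(StandardCharsets.US_ASCII));

        // Two independent opens: each channel has its own file offset.
        try (FileChannel reader = FileChannel.open(path, StandardOpenOption.READ);
             FileChannel writer = FileChannel.open(path, StandardOpenOption.WRITE)) {

            ByteBuffer buf = ByteBuffer.allocate(4);
            reader.read(buf);                 // reader's offset is now 4
            System.out.println("reader position: " + reader.position()); // 4

            writer.truncate(2);               // shrink the file to 2 bytes

            // The reader's offset is unchanged; it still points at offset 4,
            // which is now past the end of the file, so the next read hits EOF.
            buf.clear();
            int n = reader.read(buf);
            System.out.println("read past truncated size returned: " + n); // -1 (EOF)
        }
    }
}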

I want to prevent copying file from memory card

I want to prevent my file from being copied from any device. That is, I have a memory card, and when I insert it into any device, such as an Android phone or a computer, my file should not be copyable from it.
Are there any resources to read, or any place where I can get some information about copy prevention?
Maybe you could partition the SD card, leave some space unpartitioned, and write some magic bytes to it. When your program executes, it determines the device it was run from. If this is the SD card, it tries to read the raw bytes from the SD card and compare them with the magic bytes; if the program is not run from the SD card, or if the magic bytes do not match, it does not execute. Done!
Please don't get me wrong, this won't be easy, but maybe it could work. Copying would still work, but the copied file would be useless. Also, this is not a ready-made solution, but rather an outline of how you could achieve your goal.
For accessing an SD card's raw data, please see
http://www.codeproject.com/Articles/28314/Reading-and-Writing-to-Raw-Disk-Sectors
and for partitioning, http://geeks.lockergnome.com/profiles/blogs/how-to-partition-an-sd-card
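To make the outline a bit more concrete, here is a rough Java sketch of the check described above. The device path, magic constant, and offset are all made up, reading a raw device normally requires root/administrator rights, and on Windows reads may need to be sector-aligned - treat this as an illustration only.

import java.io.RandomAccessFile;
import java.util.Arrays;

public class MagicByteCheck {
    // Hypothetical values: the magic bytes you wrote to the unpartitioned area
    // and the byte offset on the device where you wrote them.
    private static final byte[] MAGIC = { 0x4D, 0x59, 0x41, 0x50, 0x50 }; // "MYAPP"
    private static final long MAGIC_OFFSET = 512L * 1000000;

    public static boolean cardIsGenuine(String rawDevicePath) {
        // rawDevicePath is platform specific, e.g. "/dev/sdb" on Linux or
        // "\\\\.\\PhysicalDrive1" on Windows, and usually needs elevated rights.
        try (RandomAccessFile raw = new RandomAccessFile(rawDevicePath, "r")) {
            byte[] sector = new byte[512];   // read a whole sector to stay aligned
            raw.seek(MAGIC_OFFSET);
            raw.readFully(sector);
            return Arrays.equals(Arrays.copyOf(sector, MAGIC.length), MAGIC);
        } catch (Exception e) {
            return false; // cannot read the raw device => treat as not genuine
        }
    }

    public static void main(String[] args) {
        String device = args.length > 0 ? args[0] : "/dev/sdb"; // hypothetical default
        if (!cardIsGenuine(device)) {
            System.out.println("Not running from the expected SD card; exiting.");
            return;
        }
        System.out.println("Magic bytes match; continuing.");
    }
}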

Is there any size/row limit in a .txt file?

This question may look like a duplicate, but I am not getting the answer I am looking for.
The problem is that, on Unix, one of the 4GL binaries is fetching data from a table using a cursor and writing the data to a .txt file.
The table contains around 50 million records.
The binary takes a lot of time and does not complete, and the .txt file is also 0 bytes.
I want to know the possible reasons why the records are not written to the .txt file.
Note: there is enough disk space available.
Also, for 30 million records, I get the data in the .txt file as expected.
The information you provide is insufficient to tell for sure why the file is not written.
In UNIX, a text file is just like any other file - a collection of bytes. No specific limit (or structure) is enforced on "row size" or "row count", although obviously some programs might have limits on the maximum supported line size and such (depending on their implementation).
When a program starts writing data to a file (i.e. once its internal buffer is flushed for the first time) the file will no longer be zero-sized, so clearly your binary is doing something else all that time (unless it wipes out the file as part of its cleanup).
Try running your executable via strace to see the file I/O activity - that would give some clues as to what is going on.
Try closing the writer if you are using one to write to the file. That achieves the dual purpose of closing the resource and flushing the remaining contents of the buffer.
Computed output needs to be flushed if you are using any kind of buffered writer. I have encountered such situations a few times, and in almost all cases the issue was that the output was not being flushed.
In Java specifically, the usual best practice for writing data involves buffers: data is written to the file when the buffer fills up, but not while the buffer is only partially full. Output is lost when the program closes without flushing the buffered writer.
So, in your case, if the processing time it takes is reasonable and the output is still not in the file, it may mean that the output has been computed and is sitting in RAM but could not be written to the file (i.e. to disk) because it was never flushed.
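Since the flushing point is explained in Java terms above, here is a minimal Java sketch (the file name and loop are stand-ins for the real cursor export): closing the writer - most easily via try-with-resources - flushes whatever is still buffered, and periodic flush() calls make the file grow visibly while a long-running export is still in progress.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class WriteWithFlush {
    public static void main(String[] args) throws IOException {
        // try-with-resources closes the writer even on error, which also
        // flushes whatever is still sitting in the buffer.
        try (BufferedWriter out = new BufferedWriter(new FileWriter("records.txt"))) {
            for (int i = 0; i < 1000; i++) {   // stand-in for the cursor loop
                out.write("record " + i);
                out.newLine();
                // Optional: flush periodically so the file grows visibly
                // while the export is still in progress.
                if (i % 100 == 0) {
                    out.flush();
                }
            }
        } // close() here flushes the remaining buffered output
    }
}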
You can also consider the answers to this question.

implementing a download manager that supports resuming

I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download).
From the info I gathered so far, when sending the http request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns a http response with the data between those offsets.
So roughly what I have in mind is to split the file according to the number of allowed connections per download and send one HTTP request per split part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 parts and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets have already been downloaded and simply not requesting those.
Is this the right way to do this?
What if the web server doesn't support resuming? (my guess is it will ignore the "Range" and just send the entire file)
When sending the HTTP requests, should I specify the entire split size in the range, or ask for smaller pieces, say 1024 KB per request?
When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
Should I use a memory mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory wise? What if I have several downloads simultaneously?
If I'm not using a memory mapped file, should I open the file per allowed connection? Or when needing to write to the file simply seek? (if I did use a memory mapped file this would be really easy, since I could simply have several pointers).
Note: I'll probably be using Qt, but this is a general question so I left code out of it.
Regarding the request/response:
For a ranged request, you could get three different responses:
206 Partial Content - resuming supported and possible; check Content-Range header for size/range of response
200 OK - byte ranges ("resuming") not supported, whole resource ("file") follows
416 Requested Range Not Satisfiable - incorrect range (past EOF etc.)
Content-Range usually looks like this: Content-Range: bytes 21010-47000/47022, that is, bytes start-end/total.
Check the HTTP spec for details, especially sections 14.5, 14.16 and 14.35.
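As a rough illustration of the three cases (the URL and byte range below are made up), a Range request with Java's HttpURLConnection might look like this:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeRequestExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL; any server that supports byte ranges will do.
        URL url = new URL("http://example.com/big-file.bin");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Range", "bytes=1048576-2097151"); // second 1 MB chunk

        int status = conn.getResponseCode();
        if (status == 206) {
            // Partial content: the server honoured the range.
            System.out.println("Content-Range: " + conn.getHeaderField("Content-Range"));
            try (InputStream in = conn.getInputStream()) {
                // ... write this chunk at offset 1048576 in the output file ...
            }
        } else if (status == 200) {
            // Ranges not supported: the whole resource follows.
            System.out.println("Server ignored the Range header; full download needed.");
        } else if (status == 416) {
            System.out.println("Requested range not satisfiable (past EOF?).");
        }
        conn.disconnect();
    }
}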
I am not an expert in C++; however, I once wrote a .NET application which needed similar functionality (download scheduling, resume support, prioritizing downloads).
I used the Microsoft BITS (Background Intelligent Transfer Service) component, which is written in C; Windows Update uses BITS too. I went for this solution because I don't think I am a good enough programmer to write something of this level myself ;-)
Although I am not sure whether you can get the code of BITS, I do think you should have a look at its documentation, which might help you understand how they implemented it - the architecture, interfaces, etc.
Here it is - http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx
I can't answer all your questions, but here is my take on two of them.
Chunk size
There are two things you should consider about chunk size:
The smaller they are, the more overhead you get from sending the HTTP requests.
With larger chunks you run the risk of re-downloading the same data twice if one download fails.
I'd recommend you go with smaller chunks of data. You'll have to do some tests to see what size is best for your purpose, though.
In memory vs. files
You should write the data chunks to an in-memory buffer and then, when it is full, write it to disk. If you are going to download large files, it can be troublesome for your users if they run out of RAM. If I remember correctly, IIS stores requests smaller than 256 KB in memory and writes anything larger to disk; you may want to consider a similar approach.
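A sketch of that buffering idea in Java (the approach translates directly to C++; the 256 KB threshold just mirrors the IIS figure mentioned above, and all names are hypothetical): network data is appended to an in-memory buffer and only written to disk when the buffer fills or the download finishes.

import java.io.IOException;
import java.io.RandomAccessFile;

// One buffer per connection: data is collected in memory and written to the
// target file in larger pieces instead of many tiny disk writes.
class ChunkBuffer {
    private static final int FLUSH_THRESHOLD = 256 * 1024;

    private final RandomAccessFile file;
    private final byte[] buffer = new byte[FLUSH_THRESHOLD];
    private long fileOffset;   // where the buffered bytes belong in the target file
    private int used;          // how many buffered bytes are valid

    ChunkBuffer(RandomAccessFile file, long startOffset) {
        this.file = file;
        this.fileOffset = startOffset;
    }

    void append(byte[] data, int len) throws IOException {
        int pos = 0;
        while (pos < len) {
            int n = Math.min(len - pos, buffer.length - used);
            System.arraycopy(data, pos, buffer, used, n);
            used += n;
            pos += n;
            if (used == buffer.length) {
                flush();
            }
        }
    }

    void flush() throws IOException {
        if (used == 0) return;
        file.seek(fileOffset);
        file.write(buffer, 0, used);
        fileOffset += used;
        used = 0;
    }
}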
Besides keeping track of the offsets marking the beginning of your segments and each segment's length (unless you want to compute that upon resume, which would involve sorting the offset list and calculating the distance between consecutive offsets), you will want to check the Accept-Ranges header of the HTTP response sent by the server to make sure it supports the Range header at all. The best way to specify the range is "Range: bytes=START_BYTE-END_BYTE"; the range you request includes both START_BYTE and END_BYTE, and thus consists of (END_BYTE - START_BYTE) + 1 bytes.
Requesting micro-chunks is something I'd advise against, as you might be blacklisted by a firewall rule designed to block HTTP flooding. In general, I'd suggest you don't make chunks smaller than 1 MB and don't create more than 10 chunks.
Depending on what control you plan to have over your download, if you've got socket-level control you can consider writing to disk only every 32 KB or so, or writing the data asynchronously.
I can't comment on the MMF idea, but if the downloaded file is large it's not going to be a good one, as you'll eat up a lot of RAM and may eventually even cause the system to swap, which is not efficient.
As for handling the chunks, you could just create several files - one per segment - optionally preallocating the disk space by filling each file with as many \x00 bytes as the size of the chunk (preallocating might save you some time while writing during the download, but will make starting the download slower), and then finally just write all of the chunks sequentially into the final file.
One thing you should beware of is that several servers have a limit on concurrent connections, and you don't get to know it in advance, so you should be prepared to handle HTTP errors/timeouts and to change the size of the chunks, or to create a queue of chunks, in case you created more chunks than the maximum number of connections.
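A small Java sketch of the preallocate-and-seek variant (sizes and the file name are made up); this writes each segment directly at its offset in a single preallocated file, as an alternative to the one-file-per-segment approach described above.

import java.io.IOException;
import java.io.RandomAccessFile;

public class PreallocateAndWriteSegment {
    public static void main(String[] args) throws IOException {
        long totalSize = 4L * 1024 * 1024;    // hypothetical 4 MB download
        long segmentOffset = 1024 * 1024;     // second segment starts at 1 MB

        try (RandomAccessFile out = new RandomAccessFile("download.part", "rw")) {
            // Preallocate: extend the file to its final size up front. On most
            // file systems this creates a sparse file whose unwritten regions
            // read back as zero.
            out.setLength(totalSize);

            // Later, when a segment arrives, seek to its offset and write it
            // in place instead of keeping a temporary file per segment.
            byte[] segmentData = new byte[1024 * 1024]; // stand-in for downloaded bytes
            out.seek(segmentOffset);
            out.write(segmentData);
        }
    }
}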
Not really an answer to the original questions, but another thing worth mentioning is that a resumable downloader should also check the last modified date on a resource before trying to grab the next chunk of something that may have changed.
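One way to do that check in a single round trip is the If-Range header: the server returns 206 with the requested range only if the resource is unchanged, and 200 with the full body otherwise. A rough Java sketch (the URL, offset, and saved Last-Modified value are placeholders):

import java.net.HttpURLConnection;
import java.net.URL;

public class IfRangeExample {
    public static void main(String[] args) throws Exception {
        // The Last-Modified value saved when the download originally started.
        String lastModified = "Wed, 01 Jan 2014 00:00:00 GMT"; // hypothetical saved value

        URL url = new URL("http://example.com/big-file.bin");  // hypothetical URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Range", "bytes=1048576-");
        // If-Range: resume only if the resource is unchanged; otherwise the
        // server sends the whole file again with status 200.
        conn.setRequestProperty("If-Range", lastModified);

        int status = conn.getResponseCode();
        if (status == 206) {
            System.out.println("Resource unchanged; resuming from the saved offset.");
        } else if (status == 200) {
            System.out.println("Resource changed; discard partial data and restart.");
        }
        conn.disconnect();
    }
}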
It seems to me you would want to limit the size of each download chunk. Large chunks could force you to repeat the download of data if the connection aborts close to the end of a part. This is especially an issue with slower connections.
For pause/resume support, look at this simple example:
Simple download manager in Qt with pause/resume support
