I am new to the Flex environment, specifically Flex 3. I've been studying it for one week.
I have a project in which I need FTP to upload and download MP3 and picture files.
What is the best way to get started?
If you mean creating an FTP client in Flex, it has been done already:
FlexFTP
I used this two years ago. It works great, but one thing is missing, and it makes it impossible to use for big files (more than 10 or 50 MB).
Sockets in Flex have a buffer you can write into so that data will be sent, but you cannot determine how much of the buffer has actually been sent, nor whether it is empty.
So upload progress and upload completion are impossible to determine in Flex... maliboo made an approximation in the pl.maliboo.ftp.invokers.UploadInv class: he assumes 4096 bytes are sent every 300 ms and considers that good enough.
That assumption always holds, because it is the worst case, but when you upload 3 GB over a fast connection the script will keep running long after the upload has actually finished.
I am creating an app that uses sockets to send data to other devices. I am using the HTTP protocol to send and receive data. The problem is that I now have to stream a video, and I don't know how to send (or stream) it.
If the user jumps directly to the middle of the video, how should I send the data?
Thanks...
HTTP wasn't really designed with streaming in mind. Honestly the best protocol is something UDP-based (SCTP is even better in some ways, but support is sketchy). However, I appreciate you may be constrained to HTTP so I'll answer your question as written.
I should also point out that streaming video is actually quite a deep topic and all I can do here is try to touch on some of the approaches that you might want to investigate. If you have control of the end-to-end solution then you have some choices to make - if you only control one end, then your choices are more or less dictated by what's available at the other end.
If you only want to play from the start of the file then it's fairly straightforward: make a standard HTTP request and just start playing once you've buffered enough video that the rest of the file will finish downloading before playback catches up with it. You don't need any special server support for this, and any video format will work.
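To put rough numbers on that, here is a small sketch (my own illustration, not tied to any particular player; it assumes a constant bit rate and a roughly steady, measured download rate, and the names are hypothetical):

    #include <algorithm>

    // How many bytes must be buffered before playback can start without
    // stalling, given that the rest of the file keeps downloading at
    // roughly downloadBps while the video plays for durationSec seconds.
    long long bytesNeededBeforePlay(long long totalBytes,
                                    double durationSec,
                                    double downloadBps)
    {
        // Require (totalBytes - buffered) / downloadBps <= durationSec.
        double needed = totalBytes - downloadBps * durationSec;
        return std::max(0LL, static_cast<long long>(needed));
    }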
Seeking is trickier. You could take the approach that sites like YouTube used to take which is to simply not allow the user to seek until the file has downloaded enough to reach that point in the video (or just leave them looking at a spinner until that point is reached). This is not the user experience that most people will expect these days, however.
To do better you need to be in control of the streaming client. I would suggest treating the file in chunks and making byte range requests for one chunk at a time. When the user seeks into the middle of the file, you can work out the byte offset into the file and start making byte range requests from that point.
If the video format contains some sort of index at the start then you can use this to work out file offsets - so, your video client would have to request at least enough to get the index before doing any seeking.
If the format doesn't have any form of index but it's encoded at a constant bit rate (CBR) then you can do an initial HEAD request and look at the Content-Length header to find the size of the file. Then, if the user seeks 40% of the way through the video, for example, you just seek to 40% of the way through the encoded frames. This relies on knowing enough about the file format that you can calculate an appropriate seek point so that you can identify framing information and the like (or at least an encoding format which allows you to resynchronise with both the audio and video streams even if you jump in at an arbitrary point in the file). This approach might also work with variable bit rate (VBR) as long as the format is such that you can recover from an arbitrary seek.
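As an illustration of the CBR case (again just a sketch of mine, with hypothetical names), the byte offset is proportional to the time offset, and you would still need to resynchronise on a frame boundary once the data starts arriving:

    #include <cstdio>
    #include <string>

    // Build an open-ended Range header for a seek into a CBR file.
    // contentLength comes from a HEAD request's Content-Length header.
    std::string rangeHeaderForSeek(long long contentLength,
                                   double seekSec, double totalSec)
    {
        long long offset =
            static_cast<long long>(contentLength * (seekSec / totalSec));
        char buf[64];
        std::snprintf(buf, sizeof(buf), "Range: bytes=%lld-", offset);
        return std::string(buf);
    }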
It's not ideal but as I said, HTTP wasn't really designed for streaming.
If you have control of the file format and the server, you could make life easier by making each chunk a separate resource. This is how Apple HTTP live streaming and Microsoft smooth streaming both work. They need tool support to pre-process the video, and I don't know if you have control of the server end. Might be worth looking into, however. These also do more clever tricks such as allowing a client to switch between multiple versions of the stream encoded at different bit rates to cope with differences in bandwidth.
I have two questions; the first is the main one.
1. I was able to display the date in a CICS map, but I need it to be ticking, i.e., it should be updated and redisplayed every second.
2. I have a COBOL-DB2 program that automatically writes data from the database (DB2) to a file. I want this program to be called on a schedule, i.e., every hour, every two hours, or every day.
Thank you
You can do this, but you will need to modify the traditional pseudo-conversational approach. Instead of returning and waiting for a user event, you can START your transaction after some number of seconds with your current COMMAREA and quit. If a user event occurs in that time, you can cancel your START request; if it doesn't, you can refresh the screen timestamp and repeat.
It is kind of a pain just to get a timestamp refreshed, and it doesn't make much sense to bother unless you have a really good reason.
The DB2 stuff is plain easy. Start your tran using interval control, the same START AFTER() described above, and you can have it run hourly, or bihourly, or whatever.
I don't think that you need to modify your pseudo-conversational approach to achieve what you need. Just issue an EXEC CICS START command with a one-second delay (just do this once) for a small program that simply issues a SEND MAP (or TC write) to the terminal facility. Ideally, reserve a common area on the screen so all transactions can use a common program. At some point, when the updates are no longer required, CANCEL the START request. The way I see it, the timer update transaction will mix in nicely with your user-initiated transaction flow. If a user transaction is active when the START timer pops, the timer update program will just be delayed a little.
While this should work, you need to bear in mind that you might be driving 3,600 transactions per hour for each user. Is this feature really worth all that?
This is not possible in standard CICS using maps. The 3270 protocol does not lend itself to continually updating screens. The majority of automatic updating screens such as consoles and monitoring displays use native VTAM methods, building their own data streams.
It might be possible to do this using unformatted data, but I would not recommend it in CICS. Pseudo-conversational CICS does not have a program in control during screen display, and conversational programming is highly discouraged.
You can't really do this in CICS, which was designed for pseudo-interactive responses at best. It was designed for use on mainframes where your terminal was sent a whole page or screen at a time. The program read the screen as received (the screen had fields the user could update, and if you didn't change them the terminal did not send that data back); then, having taken in the parts of the screen containing changes, the CICS transaction sent its response back and quit.
This makes for very efficient data entry and inquiry programs. But realize that when the program has finished processing the screen, it has quit; it is gone, it is not even in memory any more, and all of its resources have been reclaimed. This allows a company to run a mainframe with 300 terminals and maybe 10 megabytes of real memory, because while the program is waiting for you to respond it is using no resources at all. If there are 200 people running a data entry program, they are all running the same re-entrant copy of the same program, and each of them uses maybe 1K of writable storage for the part that reads a screen or a file record and does some calculations. In other words, 200 people are simultaneously running one module that uses 20K of memory for the application (and it is the same 20K for every single one of them) plus 1K each of actual read/write data.
Think about that for a moment: the first user to start that data entry program uses 20K of memory for the application, plus 1K for writable data. Each user after that uses an additional 1K of memory, and that's all. While they are sitting there looking at the terminal, all they might be using is 4 bytes in a table to tell the system there's a terminal connected; no resources are used at all.
To be able to have a screen updated on a regular basis means that something has to keep running, which is not something CICS does very well. CICS is not intended to be used for interactive processing the way a PC does because you're actually running live on the PC.
EXEC CICS ASKTIME END-EXEC to update the timestamp.
EXEC CICS SEND MAP DATAONLY END-EXEC to update the screen.
However, using the suggested
EXEC CICS START TRANSID ('name' | namefld)
INTERVAL (time)
END-EXEC.
is actually the better way.
I am using the jQuery plugin below for playing MP3s:
www.happyworm.com/jquery/jplayer
However, there is a bug in Flash where the total play (track) time won't show up correctly UNTIL AFTER the whole MP3 has completely downloaded.
I wonder if there is a way to work around this and get the correct total time using JavaScript, another Flash component, or even a backend library in ASP.NET. Any suggestion helps. Thanks
Are you sure that's a bug? Looking at the header definition for the MP3 format, I don't see any values for the length of the file. Generally, applications that play MP3s have to calculate the play time themselves, and that may not be doable until the entire file has downloaded. So the behavior you're seeing from Flash might be expected.
Theoretically, if it's a fixed-bitrate file (as opposed to VBR), then knowing the bitrate (taken from the header) and the total size of the file should be enough to calculate it. However, the server would have to report the size of the file in the response headers (and that's not guaranteed to be accurate).
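If you did want to attempt that estimate somewhere (server-side or otherwise), the arithmetic is just total bits divided by bits per second; a minimal sketch in C++, assuming a CBR file and an accurate Content-Length:

    // Hypothetical helper: estimate the duration of a CBR MP3.
    // bitrateKbps is the bitrate read from the MP3 frame header, e.g. 128.
    double estimateMp3DurationSeconds(long long contentLengthBytes,
                                      int bitrateKbps)
    {
        // duration = total bits / bits per second
        return (contentLengthBytes * 8.0) / (bitrateKbps * 1000.0);
    }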
My guess is you'd need some service on the server that could calculate the length and report that to you in a separate request.
I am working on a Flex application/game where a lot of UIComponents are moved around on a canvas.
I would like to "record" an FLV movie of the movement on the canvas. Is there any way this can be accomplished?
I essentially want my users to be able to record small FLV videos of their games to be uploaded to YouTube.
Any ideas or suggestions about how to do this ?
There is SimpleFlvWriter (for AIR). You may modify it to get a non-AIR version. But memory management will be an issue, since BitmapData will take up a lot of memory... It may be possible for a few seconds of FLV, but definitely not for several minutes.
Usually we stream things to a Flash server (e.g. Flash Media Server, Red5) and let the server create the FLV. But you need to find a way to convert the screen captures to a NetStream. Or you may find other server-side technology that can create an FLV from a sequence of BitmapData. Either way it will consume a lot of bandwidth.
An alternative I can think of is to save all the game commands (in XML or another text format) and send them to the server, then write a server-side program that generates the FLV from the game commands alone. But that would be the most difficult solution to implement.
I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download).
From the info I have gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and a value of "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets.
So roughly what I have in mind is to split the file according to the number of allowed connections per file and send one HTTP request per split part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 parts and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets have already been downloaded and simply not requesting those.
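In Qt terms, I imagine the request part would look roughly like this (an untested sketch just to show the idea; the names are made up):

    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QUrl>

    // Fire one ranged GET per allowed connection. Ranges are inclusive.
    void startChunkedDownload(QNetworkAccessManager *manager,
                              const QUrl &url, qint64 fileSize,
                              int connections)
    {
        const qint64 chunkSize = fileSize / connections;
        for (int i = 0; i < connections; ++i) {
            const qint64 start = i * chunkSize;
            const qint64 end = (i == connections - 1)
                                   ? fileSize - 1
                                   : start + chunkSize - 1;

            QNetworkRequest request(url);
            request.setRawHeader("Range",
                                 "bytes=" + QByteArray::number(start) +
                                 "-" + QByteArray::number(end));
            QNetworkReply *reply = manager->get(request);
            // Connect reply's readyRead()/finished() signals and write the
            // received data at offset 'start' in the output file.
            Q_UNUSED(reply);
        }
    }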
Is this the right way to do this?
What if the web server doesn't support resuming? (my guess is it will ignore the "Range" and just send the entire file)
When sending the HTTP requests, should I specify the entire split size in the range, or ask for smaller pieces, say 1024k per request?
When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it a good idea memory-wise? What if I have several downloads running simultaneously?
If I'm not using a memory-mapped file, should I open the file once per allowed connection, or simply seek in a single file when I need to write? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.)
Note: I'll probably be using Qt, but this is a general question so I left code out of it.
Regarding the request/response:
For a ranged request, you could get three different responses:
206 Partial Content - resuming supported and possible; check Content-Range header for size/range of response
200 OK - byte ranges ("resuming") not supported, whole resource ("file") follows
416 Requested Range Not Satisfiable - incorrect range (past EOF etc.)
Content-Range usually looks like this: Content-Range: bytes 21010-47000/47022, that is, bytes start-end/total.
Check the HTTP spec for details, especially sections 14.5, 14.16 and 14.35.
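As a rough illustration (mine, not from the spec; names are hypothetical), handling those cases and pulling the numbers out of Content-Range could look like this:

    #include <cstdio>

    struct ContentRange { long long start, end, total; };

    // Parse a header value of the form "bytes 21010-47000/47022".
    bool parseContentRange(const char *value, ContentRange *out)
    {
        return std::sscanf(value, "bytes %lld-%lld/%lld",
                           &out->start, &out->end, &out->total) == 3;
    }

    // 206: range honoured; 200: whole file follows; 416: bad range.
    bool rangeWasHonoured(int httpStatus) { return httpStatus == 206; }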
I am not an expert in C++; however, I once wrote a .NET application which needed similar functionality (download scheduling, resume support, prioritizing downloads).
I used the Microsoft BITS (Background Intelligent Transfer Service) component, which was developed in C. Windows Update uses BITS too. I went for this solution because I don't think I am a good enough programmer to write something of this level myself ;-)
Although I am not sure whether you can get the source code of BITS, I do think you should have a look at its documentation, which might help you understand how they implemented it: the architecture, interfaces, etc.
Here it is - http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx
I can't answer all your questions, but here is my take on two of them.
Chunk size
There are two things you should consider about chunk size:
The smaller they are, the more overhead you get from sending the HTTP requests.
With larger chunks you run the risk of re-downloading the same data if one download fails.
I'd recommend you go with smaller chunks of data. You'll have to do some tests to see which size is best for your purpose, though.
In memory vs. files
You should write the data chunks to an in-memory buffer and then write it to disk when it is full. If you are going to download large files, it can be troublesome for your users if they run out of RAM. If I remember correctly, IIS stores requests smaller than 256 KB in memory and writes anything larger to disk; you may want to consider a similar approach.
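A minimal sketch of that buffering idea (my own illustration; the 256 KB threshold simply mirrors the IIS behaviour mentioned above, and the names are hypothetical):

    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    // Accumulate downloaded data in RAM and flush it to disk in batches.
    class BufferedChunkWriter {
    public:
        explicit BufferedChunkWriter(const std::string &path,
                                     std::size_t flushThreshold = 256 * 1024)
            : out_(path, std::ios::binary), threshold_(flushThreshold) {}

        void write(const char *data, std::size_t len) {
            buffer_.insert(buffer_.end(), data, data + len);
            if (buffer_.size() >= threshold_)
                flush();
        }

        void flush() {
            if (!buffer_.empty()) {
                out_.write(buffer_.data(),
                           static_cast<std::streamsize>(buffer_.size()));
                buffer_.clear();
            }
        }

        ~BufferedChunkWriter() { flush(); }

    private:
        std::ofstream out_;
        std::size_t threshold_;
        std::vector<char> buffer_;
    };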
Besides keeping track of the offsets marking the beginning of your segments and each segment's length (unless you want to compute that upon resume, which would involve sorting the offset list and calculating the distance between consecutive entries), you will want to check the Accept-Ranges header of the HTTP response sent by the server to make sure it supports the use of the Range header. The best way to specify the range is "Range: bytes=START_BYTE-END_BYTE"; the range you request includes both START_BYTE and END_BYTE, and thus consists of (END_BYTE - START_BYTE) + 1 bytes.
Requesting micro chunks is something I'd advise against, as you might be blacklisted by a firewall rule meant to block HTTP floods. In general, I'd suggest you don't make chunks smaller than 1 MB and don't create more than 10 chunks per file.
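To illustrate that advice together with the inclusive range arithmetic above (a sketch only, names hypothetical):

    #include <algorithm>
    #include <vector>

    struct Chunk { long long start, end; };  // inclusive: (end - start) + 1 bytes

    // At least 1 MB per chunk, at most 10 chunks.
    std::vector<Chunk> splitIntoChunks(long long fileSize)
    {
        const long long minChunkSize = 1024 * 1024;
        const long long maxChunks = 10;

        const long long chunks =
            std::min(maxChunks, std::max(1LL, fileSize / minChunkSize));
        const long long chunkSize = fileSize / chunks;

        std::vector<Chunk> result;
        for (long long i = 0; i < chunks; ++i) {
            const long long start = i * chunkSize;
            const long long end =
                (i == chunks - 1) ? fileSize - 1 : start + chunkSize - 1;
            result.push_back({start, end});
        }
        return result;
    }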
Depending on how much control you plan to have over your download, if you've got socket-level control you can consider writing only once every 32 KB or so, or writing the data asynchronously.
I can't comment much on the memory-mapped file idea, but if the downloaded file is large it's not going to be a good one, as you'll eat up a lot of RAM and eventually even cause the system to swap, which is not efficient.
About handling the chunks, you could just create several files, one per segment, optionally preallocating the disk space by filling each file with as many \x00 bytes as the size of the chunk (preallocating might save you some time while writing during the download, but will make starting the download slower), and then finally write all of the chunks sequentially into the final file.
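The final stitching step could be as simple as this sketch (hypothetical naming; segments are assumed to be non-empty and complete):

    #include <fstream>
    #include <string>
    #include <vector>

    // Append each per-segment file, in order, to the final output file.
    bool assembleSegments(const std::vector<std::string> &segmentPaths,
                          const std::string &finalPath)
    {
        std::ofstream out(finalPath, std::ios::binary);
        if (!out)
            return false;

        for (const std::string &path : segmentPaths) {
            std::ifstream in(path, std::ios::binary);
            if (!in)
                return false;
            out << in.rdbuf();   // copy the whole segment
        }
        return static_cast<bool>(out);
    }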
One thing you should beware of is that several servers have a limit on concurrent connections, and you don't get to know it in advance, so you should be prepared to handle HTTP errors/timeouts and to change the chunk size or create a queue of chunks in case you created more chunks than the maximum number of connections.
Not really an answer to the original questions, but another thing worth mentioning is that a resumable downloader should also check the last modified date on a resource before trying to grab the next chunk of something that may have changed.
It seems to me you would want to limit the size of each download chunk. Large chunks could force you to repeat the download of data if the connection aborts close to the end of a chunk, which is especially an issue with slower connections.
For pause/resume support, look at this simple example:
Simple download manager in Qt with pause/resume support