Why does Facebook's conversion pixel load multiple JavaScript files? - api-design

If I visit a website with the Facebook conversion pixel installed (such as https://www.walmart.com/), I notice that several different JavaScript files are loaded by the pixel.
The first one is https://connect.facebook.net/en_US/fbevents.js.
The second one is
https://connect.facebook.net/signals/config/168539446845503?v=2.9.2&r=stable. This one seems to have some user specific configuration data baked into the file.
The third one is https://connect.facebook.net/signals/plugins/inferredEvents.js?v=2.9.2
What I don't understand is, why doesn't Facebook simply consolidate all of these into one request, like https://connect.facebook.net/en_US/168539446845503/fbevents.js?v=2.9.2&r=stable, and then simply return one file with everything in it? This would be able to do everything the conversion pixel does now, but with 1 request instead of 3.

Since the page already makes more than a hundred requests while loading, fetching one JavaScript file instead of three would not be a significant improvement.
Facebook probably split the code into three files for better design:
one generic library: fbevents.js
one more specific plugin: inferredEvents.js, which uses the first one
one file of generated code, probably specific to merchant 168539446845503 (Walmart?)
This separation makes code maintenance easier (testing, reuse, bug fixes).
And finally, the generic files fbevents.js and inferredEvents.js can be cached by the browser and reused on other websites. This is a kind of optimization, possibly better than the one you suggest.

Having multiple resource requests to the same origin is far, far less of an issue than it was a few years ago:
Internet speeds are much faster.
Latency is lower (most notably on 5G phones).
The HTTP/3 protocol has many improvements that help when multiplexing files simultaneously from the same server.
Browsers don't limit the number of active connections to a site as aggressively as they used to (and that limit doesn't matter with HTTP/3 anyway).
Facebook serves these files over HTTP/3, which you can confirm from the Protocol column in the browser's network panel.

Related

Different TTFB value on Chrome vs Web Vitals

I am noticing different TTFB values in the Chrome network tab vs what is logged by web-vitals. Ideally they should be exactly the same, but I sometimes see differences as large as 2-3 seconds in certain scenarios.
I am using Next.js and its reportWebVitals hook to log the respective performance metrics.
Here is a sample repo, app URL and screenshots for reference.
Using performance.timing.responseStart - performance.timing.requestStart returns a more appropriate value than relying on the web-vitals TTFB value.
Any idea what could be going wrong? Is it a bug in web-vitals and I shouldn't be using it, or a mistake at my end in consuming/logging the values?
The number provided by reportWebVitals (and the underlying library web-vitals) is generally considered the correct TTFB in the web performance community (though to be fair, there are some differences in implementation across tools).
I believe DevTools labels that smaller number "Waiting (TTFB)" as an informal hint to give that "waiting" period context, and because it usually makes up the large majority of the TTFB time.
However, from a user-centric perspective, time-to-first-byte should really include all the time from when the user starts navigating to a page to when the server responds with the first byte of that page--which will include time for DNS resolution, connection negotiation, redirects (if any), etc. DevTools does include at least some information about that extra time in that screenshot, just separated into various periods above the ostensible TTFB number (see the "Queueing", "Stalled", and "Request Sent" entries).
Generally the Resource Timing spec can be used as the source of truth for talking about web performance. It places time 0 as the start of navigation:
Throughout this work, all time values are measured in milliseconds since the start of navigation of the document [HR-TIME-2]. For example, the start of navigation of the document occurs at time 0.
And then defines responseStart as
The time immediately after the user agent's HTTP parser receives the first byte of the response
So performance.timing.responseStart - performance.timing.navigationStart by itself is the browser's measure of TTFB (or performance.getEntriesByType('navigation')[0].responseStart in the newer Navigation Timing Level 2 API), and that's the number web-vitals uses for TTFB as well.
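For reference, here is a minimal sketch of logging both numbers side by side. It assumes the web-vitals v3+ package (which exports onTTFB; v2 exposes getTTFB instead):

```typescript
import { onTTFB } from 'web-vitals';

// web-vitals measures TTFB from the start of navigation, so it includes
// redirect, DNS lookup and connection time.
onTTFB((metric) => {
  console.log('web-vitals TTFB:', metric.value, 'ms');
});

// The raw Navigation Timing Level 2 equivalent:
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
console.log('TTFB (responseStart from navigation start):', nav.responseStart, 'ms');

// DevTools' "Waiting (TTFB)" roughly corresponds to this smaller slice:
console.log('requestStart -> responseStart:', nav.responseStart - nav.requestStart, 'ms');
```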

How to send chunks of video for streaming using HTTP protocol?

I am creating an app which uses sockets to send data to other devices. I am using the HTTP protocol to send and receive data. Now the problem is, I have to stream a video and I don't know how to send (or stream) it.
If the user jumps directly to the middle of the video, how should I send the data?
Thanks...
HTTP wasn't really designed with streaming in mind. Honestly the best protocol is something UDP-based (SCTP is even better in some ways, but support is sketchy). However, I appreciate you may be constrained to HTTP so I'll answer your question as written.
I should also point out that streaming video is actually quite a deep topic and all I can do here is try to touch on some of the approaches that you might want to investigate. If you have control of the end-to-end solution then you have some choices to make - if you only control one end, then your choices are more or less dictated by what's available at the other end.
If you only want to play from the start of the file then it's fairly straightforward - make a standard HTTP request and just start playing as soon as you've buffered enough video that playback won't catch up with the download before the file finishes downloading. You don't need any special server support for this and any video format will work.
Seeking is trickier. You could take the approach that sites like YouTube used to take which is to simply not allow the user to seek until the file has downloaded enough to reach that point in the video (or just leave them looking at a spinner until that point is reached). This is not the user experience that most people will expect these days, however.
To do better you need to be in control of the streaming client. I would suggest treating the file in chunks and making byte range requests for one chunk at a time. When the user seeks into the middle of the file, you can work out the byte offset into the file and start making byte range requests from that point.
If the video format contains some sort of index at the start then you can use this to work out file offsets - so, your video client would have to request at least enough to get the index before doing any seeking.
If the format doesn't have any form of index but it's encoded at a constant bit rate (CBR) then you can do an initial HEAD request and look at the Content-Length header to find the size of the file. Then, if the user seeks 40% of the way through the video, for example, you just seek to 40% of the way through the encoded frames. This relies on knowing enough about the file format that you can calculate an appropriate seek point and identify framing information and the like (or at least on an encoding format which allows you to resynchronise with both the audio and video streams even if you jump in at an arbitrary point in the file). This approach might also work with variable bit rate (VBR) as long as the format is such that you can recover from an arbitrary seek.
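As a rough illustration of that CBR case, here is a sketch using byte-range requests. The URL is a placeholder and the server is assumed to honour Range; it is shown with fetch for brevity, but any HTTP client can issue the same requests:

```typescript
const videoUrl = 'https://example.com/video.mp4';

async function fetchFromSeekPoint(seekFraction: number): Promise<Response> {
  // 1. HEAD request to learn the total size without downloading the body.
  const head = await fetch(videoUrl, { method: 'HEAD' });
  const totalBytes = Number(head.headers.get('content-length'));

  // 2. For constant-bit-rate content, N% of the duration is roughly N% of the bytes.
  const startByte = Math.floor(totalBytes * seekFraction);

  // 3. Ask for everything from that offset onwards. 206 means the server
  //    honoured the range; 200 means it ignored it and sent the whole file.
  const res = await fetch(videoUrl, { headers: { Range: `bytes=${startByte}-` } });
  if (res.status !== 206) {
    console.warn('Range not honoured, falling back to a full download');
  }
  return res;
}

// e.g. start reading from 40% of the way through the file
fetchFromSeekPoint(0.4).then((res) => console.log(res.status, res.headers.get('content-range')));
```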
It's not ideal but as I said, HTTP wasn't really designed for streaming.
If you have control of the file format and the server, you could make life easier by making each chunk a separate resource. This is how Apple HTTP live streaming and Microsoft smooth streaming both work. They need tool support to pre-process the video, and I don't know if you have control of the server end. Might be worth looking into, however. These also do more clever tricks such as allowing a client to switch between multiple versions of the stream encoded at different bit rates to cope with differences in bandwidth.

Asp.net guaranteed response time

Does anybody have any hints as to how to approach writing an ASP.net app that needs to have a guaranteed response time?
When under high load that would normally cause us to exceed our desired response time, we want to throw out an appropriate number of requests, so that the rest of the requests can return before the max response time. Throwing out requests based on exceeding a fixed req/s is not viable, as there are other external factors controlling response time that cause the max RPS we can safely support to drift and fluctuate fairly drastically over time.
Its ok if a few requests take a little too long, but we'd like the great majority of them to meet the required response time window. We want to "throw out" the minimal or near minimal number of requests so that we can process the rest of the requests in the allotted response time.
It should account for ASP.Net queuing time, ideally the network request time but that is less important.
We'd also love to do adaptive work, like make a db call if we have plenty of time, but do some computations if we're shorter on time.
Thanks!
SLAs with a guaranteed response time require a bit of work.
First off you need to spend a lot of time profiling your application. You want to understand exactly how it behaves under various load scenarios: light, medium, heavy, crushing.. When doing this profiling step it is going to be critical that it's done on the exact same hardware / software configuration that production uses. Results from one set of hardware have no bearing on results from an even slightly different set of hardware. This isn't just about the servers either; I'm talking routers, switches, cable lengths, hard drives (make/model), everything. Even BIOS revisions on the machines, RAID controllers and any other device in the loop.
While profiling make sure the types of work loads represent an actual slice of what you are going to see. Obviously there are certain load mixes which will execute faster than others.
I'm not entirely sure what you mean by "throw out an appropriate number of requests". That sounds like you want to drop those requests... which sounds wrong on a number of levels. Doing this usually kills an SLA as being an "outage".
Next, you are going to have to actively monitor your servers for load. If load levels get within a certain percentage of your max then you need to add more hardware to increase capacity.
Another thing, monitoring result times internally is only part of it. You'll need to monitor them from various external locations as well depending on where your clients are.
And that's just about your application. There are other forces at work such as your connection to the Internet. You will need multiple providers with active failover in case one goes down... Or, if possible, go with a solid cloud provider.
Yes, in the last mvcConf one of the speakers compared the performance of various view engines for ASP.NET MVC. I think it was Steven Smith's presentation that did the comparison, but I'm not 100% sure.
You have to keep in mind, however, that ASP.NET will really only play a minor role in the performance of your app; the DB is likely to be your biggest bottleneck.
Hope the video helps.

Get mp3 total track time using either javascript or ASP.NET

I am using the jQuery plugin below for playing MP3s:
www.happyworm.com/jquery/jplayer
However, there is a bug in Flash where the total play (track) time won't show up correctly UNTIL AFTER the whole MP3 has completely downloaded.
I wonder if there is a way to work around this and get the correct total time, using either JavaScript, another Flash player, or even a backend library in ASP.NET. Any suggestion helps. Thanks
You sure that's a bug? Looking at the header definition for the MP3 format I don't see any values for the length of the file. Generally applications that play MP3s would have to calculate the time, and that may not be doable until the entire file is downloaded. So the behavior you're seeing from Flash might be expected.
Theoretically if it's a fixed bitrate file (as opposed to VBR) then knowing the bitrate (gotten from the header) and the total size of the file should be enough to calculate it. However, the server would have to report the size of the file in the response headers (and that's not guaranteed to be accurate).
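As a rough sketch of that estimate, duration ≈ file size / bitrate. The URL and the 128 kbps figure are example values only; a real implementation would read the bitrate from the first MPEG frame header, and a VBR file or a large ID3 tag will throw the estimate off:

```typescript
const mp3Url = 'https://example.com/track.mp3';
const bitrateKbps = 128;

async function estimateDurationSeconds(): Promise<number> {
  const head = await fetch(mp3Url, { method: 'HEAD' });
  const sizeBytes = Number(head.headers.get('content-length')); // may be absent or inaccurate
  return (sizeBytes * 8) / (bitrateKbps * 1000);                // bits / (bits per second)
}

estimateDurationSeconds().then((s) => console.log(`~${Math.round(s)} seconds`));
```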
My guess is you'd need some service on the server that could calculate the length and report that to you in a separate request.

implementing a download manager that supports resuming

I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download).
From the info I gathered so far, when sending the http request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns a http response with the data between those offsets.
So roughly what I have in mind is to split the file according to the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those.
Is this the right way to do this?
What if the web server doesn't support resuming? (my guess is it will ignore the "Range" and just send the entire file)
When sending the HTTP requests, should I specify the entire split size in the range? Or maybe ask for smaller pieces, say 1024k per request?
When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
Should I use a memory mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it wise memory-wise? What if I have several downloads running simultaneously?
If I'm not using a memory mapped file, should I open the file per allowed connection? Or when needing to write to the file simply seek? (if I did use a memory mapped file this would be really easy, since I could simply have several pointers).
Note: I'll probably be using Qt, but this is a general question so I left code out of it.
Regarding the request/response:
For a ranged request, you could get three different responses:
206 Partial Content - resuming supported and possible; check Content-Range header for size/range of response
200 OK - byte ranges ("resuming") not supported, whole resource ("file") follows
416 Requested Range Not Satisfiable - incorrect range (past EOF etc.)
Content-Range usually looks like this: Content-Range: bytes 21010-47000/47022, that is bytes start-end/total.
Check the HTTP spec for details, especially sections 14.5, 14.16 and 14.35.
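Here is a sketch of handling those three responses; the question is about C++, but the logic is HTTP-level so it's shown with fetch for brevity, and the URL and range values are placeholders:

```typescript
async function probeRange(url: string): Promise<void> {
  const res = await fetch(url, { headers: { Range: 'bytes=21010-47000' } });

  if (res.status === 206) {
    // Partial Content: the server honoured the range.
    // Content-Range: bytes 21010-47000/47022  ->  "bytes start-end/total"
    const m = res.headers.get('content-range')?.match(/bytes (\d+)-(\d+)\/(\d+)/);
    if (m) {
      const [start, end, total] = m.slice(1).map(Number);
      console.log(`received bytes ${start}-${end} of ${total}`);
    }
  } else if (res.status === 200) {
    // OK: byte ranges not supported; the whole resource follows.
    console.log('server ignored the range, fall back to a full download');
  } else if (res.status === 416) {
    // Requested Range Not Satisfiable: the requested offset is past EOF.
    console.log('range past end of file, re-check the resource size');
  }
}
```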
I am not an expert on C++; however, I once did a .NET application which needed similar functionality (download scheduling, resume support, prioritizing downloads).
I used the Microsoft BITS (Background Intelligent Transfer Service) component, which was developed in C. Windows Update uses BITS too. I went for this solution because I don't think I am a good enough programmer to write something of this level myself ;-)
Although I am not sure if you can get the code of BITS, I do think you should have a look at its documentation, which might help you understand how they implemented it - the architecture, interfaces, etc.
Here it is - http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx
I can't answer all your questions, but here is my take on two of them.
Chunk size
There are two things you should consider about chunk size:
The smaller they are, the more overhead you get from sending the HTTP requests.
With larger chunks you run the risk of re-downloading the same data twice, if one download fails.
I'd recommend you go with smaller chunks of data. You'll have to do some tests to see what size is best for your purpose though.
In memory vs. files
You should write the data chunks to an in-memory buffer, and then write it to disk when it is full. If you are going to download large files, it can be troublesome for your users if they run out of RAM. If I remember correctly, IIS stores requests smaller than 256 KB in memory and writes anything larger to disk; you may want to consider a similar approach.
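Here is a sketch of that buffer-then-flush approach. The question is about C++/Qt, so this is TypeScript/Node only for brevity; the 256 KB threshold mirrors the IIS behaviour mentioned above and is just an example value:

```typescript
import { openSync, writeSync, closeSync } from 'node:fs';

const FLUSH_THRESHOLD = 256 * 1024;

class SegmentWriter {
  private pending: Buffer[] = [];
  private pendingBytes = 0;

  constructor(private fd: number, private position: number) {}

  add(chunk: Buffer): void {
    this.pending.push(chunk);
    this.pendingBytes += chunk.length;
    if (this.pendingBytes >= FLUSH_THRESHOLD) this.flush();
  }

  flush(): void {
    const data = Buffer.concat(this.pending);
    writeSync(this.fd, data, 0, data.length, this.position); // write at this segment's offset
    this.position += data.length;
    this.pending = [];
    this.pendingBytes = 0;
  }
}

// Usage for a segment that starts at byte 0 of the target file:
const fd = openSync('download.part', 'w');
const writer = new SegmentWriter(fd, 0);
writer.add(Buffer.from('...bytes received from the network...'));
writer.flush(); // flush whatever remains when the segment completes
closeSync(fd);
```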
Besides keeping track of the offsets marking the beginning of your segments and each segment's length (unless you want to compute that on resume, which would involve sorting the offset list and calculating the distance between consecutive offsets), you will want to check the Accept-Ranges header of the HTTP response sent by the server to make sure it supports the usage of the Range header. The best way to specify the range is "Range: bytes=START_BYTE-END_BYTE"; the range you request includes both START_BYTE and END_BYTE, thus consisting of (END_BYTE-START_BYTE)+1 bytes.
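A sketch of that segment planning (again TypeScript/fetch for brevity; the URL and connection count are placeholders). Note that some servers honour Range without advertising Accept-Ranges, so treat a missing header as a hint rather than proof:

```typescript
async function planSegments(url: string, connections = 4): Promise<string[]> {
  const head = await fetch(url, { method: 'HEAD' });
  if (head.headers.get('accept-ranges') !== 'bytes') {
    console.warn('Server does not advertise byte-range support');
  }
  const total = Number(head.headers.get('content-length'));
  const segmentSize = Math.ceil(total / connections);

  const ranges: string[] = [];
  for (let start = 0; start < total; start += segmentSize) {
    const end = Math.min(start + segmentSize - 1, total - 1); // inclusive end byte
    ranges.push(`bytes=${start}-${end}`);                     // (end - start) + 1 bytes each
  }
  return ranges; // a 4 MB file with 4 connections yields four ~1 MB ranges
}
```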
Requesting micro chunks is something I'd advise against, as you might be blacklisted by a firewall rule meant to block HTTP flooding. In general, I'd suggest you don't make chunks smaller than 1 MB and don't make more than 10 chunks.
Depending on what control you plan to have on your download, if you've got socket-level control you can consider writing only once every 32K at least, or writing data asynchronously.
I couldn't comment on the MMF idea, but if the downloaded file is large that's not going to be a good idea as you'll eat up a lot of RAM and eventually even cause the system to swap, which is not efficient.
About handling the chunks, you could just create several files - one per segment - optionally preallocating the disk space by filling each file with as many \x00 bytes as the size of the chunk (preallocating might save you some time while writing during the download, but will make starting the download slower), and then finally just write all of the chunks sequentially into the final file.
One thing you should beware of is that several servers have a max. concurrent connections limit, and you don't get to know it in advance, so you should be prepared to handle http errors/timeouts and to change the size of the chunks or to create a queue of the chunks in case you created more chunks than max. connections.
Not really an answer to the original questions, but another thing worth mentioning is that a resumable downloader should also check the last modified date on a resource before trying to grab the next chunk of something that may have changed.
It seems to me you would want to limit the size per download chunk. Large chunks could force you to re-download data if the connection aborts close to the end of a part. This is especially an issue with slower connections.
For pause/resume support, look at this simple example:
Simple download manager in Qt with pause/resume support
