Cache breaker as query parameter vs cache breaker in filename [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
We have come across 2 ways to do cache breaking for our CSS files.
Cache breaker passed as a query parameter:
http://your1337site.com/styles/cool.css?v=123
Cache breaker as part of the name:
http://your1337site.com/styles/123.cool.css
Which way is better? And why?
I feel that the second way is more verbose, because the URL matches the file name in the folder structure, whereas the first way is good if you want to reference "cool.css" from other parts of the site, which don't have access to the unique name you generate each time.

Steve Souders's article Revving Filenames: don’t use querystring makes a good argument for changing the filename as the better of the two approaches.
...a co-worker, Jacob Hoffman-Andrews, mentioned that Squid, a popular proxy, doesn’t cache resources with a querystring. This hurts performance when multiple users behind a proxy cache request the same file – rather than using the cached version everybody would have to send a request to the origin server.
As an aside, Squid 2.7 and above does cache dynamic content with the default configuration

Related

Why do we have HTTP verbs like GET, POST and PUT instead of SELECT, INSERT and UPDATE [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
In the web world, with the HTTP protocol, why do we have verbs like GET, POST and PUT instead of words traditionally used in computer science vocabulary, like SELECT, INSERT and UPDATE? (We also have the DELETE HTTP verb, which, unlike the others, makes perfect sense.)
The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser.
HTTP (the Hypertext Transfer Protocol) was created by Tim Berners-Lee and his team.
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce after learning about the relational model from Ted Codd in the early 1970s.
SQL was developed by people at IBM
Different people developed them, and thus they named similar actions, differently.
I'm not quite certain why you believe SQL terms are "traditionally used words in computer science vocabulary." They're not even ubiquitous in database programming. When I worked in dBASE, it used LIST for a similar concept to SQL's SELECT. Why do you believe SQL had a special place in history when HTTP was being developed?
The first version of HTTP only had one verb: GET. It did the most obvious thing: get a page. HTTP wasn't built around CRUD. It was built around fetching hyperlinked pages. Later HEAD (for efficiency) and POST (for interaction) came along. It still had nothing to do with databases.
After years of use, concepts like REST have been built on top of HTTP, but it's a long way from what HTTP was designed for.
SELECT would be a very bad verb to replace GET. GET fetches a document based on its address. It's not a general query verb.

Is the H2O R package safe to use for secured (patient) data? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
The H2O R package is a great resource for building predictive models. But I am concerned with the security aspect of it.
Is it safe to use patient data with H2O, in terms of security vulnerabilities?
After data ingestion into H2O-3, the data lives in memory inside the Java server process. Once the H2O process is stopped, the in-memory data vanishes.
Probably the main thing to be aware of is that your data is not sent to a SaaS cloud service or anything like that: the H2O-3 Java instance itself handles your data. You can create models in a totally air-gapped, no-internet environment.
So the short answer is, it’s perfectly safe if you know what threats you are trying to secure against and do the right things to avoid the relevant vulnerabilities (including data vulnerabilities like leaking PII and software vulnerabilities like not enabling passwords or SSL).
You can read about how to secure H2O instances and the corresponding R client here:
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/security.html
(Note if you have a high-value use case and want detailed personal help with this kind of thing, H2O.ai the company offers paid enterprise support.)

Issue in lesson 6 of "CREATE AND MANAGE API" course of apigee [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 8 years ago.
I am facing an issue while submitting the results for lesson 6 of the "CREATE AND MANAGE API" course from Apigee. I'm not able to clear it, even though the API proxy works as expected when checked manually.
From the console I'm getting 200 OK and the response payload body as expected, but when submitting it for the WEEK 6 TEST I get the message "not good", which is to say: "the response that came back wasn't something we expected. So it's hard to say what went wrong. Here's what we got back:
HTTP status: 200"
It is possible you are hitting a defect in the test tool. One workaround that was found was to remove the JSON threat protection policy from the proxy-side flow. If that doesn't work for you, email the curl commands to get an access token and to get a joke to help@apigee.com and we'll evaluate it manually.

Checking to see if their zip code is eligible [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Hello, I am working on a web page in Drupal. One of the content items is about a scholarship, and only certain zip codes are eligible for it. I was wondering whether there is a way to have a search box within that web page where the user types in their zip code and is then told whether they are eligible. I was thinking some JavaScript, but I was wondering if there are any better ideas. Thanks!
Sure, you could use JavaScript on the client side or PHP (as Drupal is written in PHP) on the server side. The tradeoff with the JavaScript approach is that you'll have to send all the valid zip codes (or some rule that computes them) to the client every time your page is loaded. The upside is that it will then be very fast for the client to try various zip codes (since no server communication is needed), and it may be easier for you to code.
For your use, you'd probably get better overall performance doing this in PHP on the server. But then you'll need to be familiar with some form of client-server communication (Ajax, for instance) so that you can send the zip code back to the server and listen for a response.
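A minimal sketch of the client-side approach described above, in JavaScript. The zip codes and element IDs below are placeholders; the real list would come from the scholarship rules.

```javascript
// Client-side zip-code eligibility check (placeholder data).
const ELIGIBLE_ZIPS = new Set(["60601", "60602", "60603"]);

function isEligible(zip) {
  // Trim whitespace so " 60601 " still matches.
  return ELIGIBLE_ZIPS.has(zip.trim());
}

// In the page, wire it to a text box and a button, e.g.:
// document.querySelector("#check").addEventListener("click", () => {
//   const zip = document.querySelector("#zip").value;
//   document.querySelector("#result").textContent =
//     isEligible(zip) ? "Eligible!" : "Sorry, not eligible.";
// });
```

Note that anything shipped to the client is visible to the user, so if the list of eligible zip codes is sensitive, the server-side PHP check is the better choice.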

i would like to add a bcc to all mails going out from my postfix [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I have a forum which sends out many automatic mails, and I would like to gather those mails to have an overview. How can I add a BCC to all outgoing mails, excluding some mail subjects, or excluding the early hours when especially many mails are sent?
There is an always_bcc parameter in Postfix you could set for that. Alternatively, modify the configuration of your forum to add yourself as a Bcc: recipient for these mails. Be aware of the privacy issues of such a configuration, though (IANAL, of course). Check the documentation on the Postfix site.
If your forum always sends out mail as a specific sender, there is sender_bcc_maps.
sender_bcc_maps = hash:/etc/postfix/bcc_senders
bcc_senders contains
forum@example.com your_bcc_recipient@example.com
Then you can add filters on the recipient side to exclude the mail you want to ignore.
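Putting the pieces above together, the configuration might look like this (the paths and addresses are the illustrative ones from the answer, not required values):

```
# /etc/postfix/main.cf -- BCC every mail from the forum's sender address
sender_bcc_maps = hash:/etc/postfix/bcc_senders

# /etc/postfix/bcc_senders
forum@example.com    your_bcc_recipient@example.com

# After editing the map, rebuild the hash file and reload Postfix:
#   postmap /etc/postfix/bcc_senders
#   postfix reload
```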
If your forum software is running on a server that is either hosting its own mail with Postfix, or relaying its mail through another host that you control (and that is running Postfix) [presumptions made from the post being tagged with postfix], then two partial solutions have already been given for getting a blind carbon copy: Keltia's (use always_bcc) and Devdas's (use sender_bcc_maps).
In the forum software, you could always modify the software to send an extra copy to your designated BCC email account.
To provide the filtering:
"excluding some mail-subjects or early hours when especially many mails are sent"
If your BCC destination is a unix account then you can use .forward to send the message to processing by another script. Procmail has some filtering capabilities that might be helpful here.
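For example, a .forward file plus a short Procmail recipe on the BCC account could discard the subjects you want to ignore (the subject pattern below is a placeholder):

```
# ~/.forward on the BCC account: pipe all incoming mail through procmail
"|/usr/bin/procmail"

# ~/.procmailrc -- discard mails whose subject matches; deliver the rest normally
:0
* ^Subject:.*\[auto-digest\]
/dev/null
```

Mail that matches no recipe falls through to normal delivery, so only the matching subjects are dropped. The time-based exclusion ("early hours") would need an extra condition, for example matching on the Date: header.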
Update the forum software logic to specify when you want a BCC.
