Hello, I am working on a web page in Drupal. One of the pages is about a scholarship, and only certain zip codes are eligible for it. I was wondering if there is a way to have a search box within that web page where the user types in their zip code and it then tells them whether they are eligible or not. I was thinking some JavaScript, but I was wondering if there are any better ideas. Thanks!
Sure, you could use JavaScript on the client side or PHP (as Drupal is written in PHP) on the server side. The trade-off with the JavaScript approach is that you'll have to send all the valid zip codes (or some rule that computes them) to the client every time your page is loaded. But the upside is that it'll be very fast for the client to try various zip codes (since no server communication is needed), and it may be easier for you to code.
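For example, a minimal client-side sketch in TypeScript (the zip codes and element IDs here are made up for illustration):

    // Hypothetical list of eligible zip codes, shipped to the client with the page.
    const eligibleZips = new Set(["10001", "10002", "10003"]);

    const input = document.querySelector<HTMLInputElement>("#zip-input")!;
    const result = document.querySelector<HTMLElement>("#zip-result")!;

    document.querySelector<HTMLButtonElement>("#zip-check")!.addEventListener("click", () => {
      const zip = input.value.trim();
      result.textContent = eligibleZips.has(zip)
        ? "You are eligible for this scholarship."
        : "Sorry, this zip code is not eligible.";
    });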
For your use case, you'd probably get better overall performance doing this in PHP on the server. But then you'll need to be familiar with some form of client-server communication (Ajax, for instance) so that you can send the zip code back to the server and listen for a response.
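Sketching the browser's end of that server-side variant (the /check-zip endpoint is hypothetical; the PHP side would look the code up and return JSON):

    // Ask a hypothetical server endpoint whether a zip code is eligible.
    async function checkZip(zip: string): Promise<boolean> {
      const response = await fetch(`/check-zip?zip=${encodeURIComponent(zip)}`);
      const data: { eligible: boolean } = await response.json();
      return data.eligible;
    }

    checkZip("10001").then((ok) => console.log(ok ? "Eligible" : "Not eligible"));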
In the web world, why does the HTTP protocol have verbs like GET, POST, and PUT instead of words traditionally used in computer science vocabulary, like SELECT, INSERT, and UPDATE? We also have the DELETE HTTP verb, which, unlike the others, makes perfect sense.
The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser.
HTTP (the Hypertext Transfer Protocol) was created by Tim Berners-Lee and his team.
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce after learning about the relational model from Ted Codd in the early 1970s.
SQL was developed by people at IBM.
Different people developed them, and thus they named similar actions differently.
I'm not quite certain why you believe SQL terms are "traditionally used words in computer science vocabulary." They're not even ubiquitous in database programming. When I worked in dBASE, it used LIST for a similar concept to SQL's SELECT. Why do you believe SQL had a special place in history when HTTP was being developed?
The first version of HTTP only had one verb: GET. It did the most obvious thing: get a page. HTTP wasn't built around CRUD. It was built around fetching hyperlinked pages. Later HEAD (for efficiency) and POST (for interaction) came along. It still had nothing to do with databases.
After years of use, concepts like REST have been built on top of HTTP, but it's a long way from what HTTP was designed for.
SELECT would be a very bad verb to replace GET. GET fetches a document based on its address. It's not a general query verb.
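For a sense of how address-oriented the protocol originally was, the very first version (retroactively called HTTP/0.9) consisted of nothing more than a GET and a path; the server replied with the raw HTML and closed the connection (the path below is just an example):

    GET /hypertext/WWW/TheProject.html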
The H2O R package is a great resource for building predictive models, but I am concerned about the security aspect of it.
Is it safe to use patient data with H2O in terms of security vulnerabilities?
After data ingestion into H2O-3, the data lives in memory inside the Java server process. Once the H2O process is stopped, the in-memory data vanishes.
Probably the main thing to be aware of is that your data is not sent to a SaaS cloud service or anything like that. The H2O-3 Java instance itself handles your data, and you can create models in a totally air-gapped, no-internet environment.
So the short answer is: it's perfectly safe if you know what threats you are trying to secure against and do the right things to avoid the relevant vulnerabilities (including data vulnerabilities like leaking PII, and software vulnerabilities like not enabling passwords or SSL).
You can read about how to secure H2O instances and the corresponding R client here:
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/security.html
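As a rough sketch of what securing an instance looks like (these flags are my recollection of the documented options, so treat them as assumptions and verify against the page above; the file names are placeholders):

    # Launch H2O-3 with HTTPS via a Java keystore and hash-file-based logins.
    java -jar h2o.jar -jks h2o.jks -hash_login -login_conf realm.properties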
(Note if you have a high-value use case and want detailed personal help with this kind of thing, H2O.ai the company offers paid enterprise support.)
I'm trying to figure out an issue I'm having with Sitecore. I'm wondering if my issue is basically a problem with its reliance on Session.Abandon():
For performance reasons, Sitecore only writes contact data to xDB (which is MongoDB) when the session ends.
This logic seems somewhat flawed (unless I misunderstand how sessions are managed in ASP.NET).
At what point (without explicitly calling Session.Abandon()) is the session flushed in this model? That is, when will the Session_End event be triggered?
Can you guarantee that the logic will always be called, or can sessions be terminated without triggering an Abandon event, for example when the app pool is recycled?
I'm trying to figure this out because it would explain something I'm experiencing, where the data is fine in session but is only written intermittently to MongoDB.
I think the strategy of building up the data in session and then flushing it to MongoDB fits xDB well.
xDB is designed to be high volume, so it makes sense for the data to be aggregated rather than constantly written into a database table. Constant writes are the way DMS worked previously, and that doesn't scale very well.
The session end is, in my opinion, pretty reliable, and Sitecore gives you various options for persisting session state (InProc, MongoDB, SQL Server); MongoDB and SQL Server are the ones recommended for production environments. You can write contact data directly to MongoDB by using the Contact Repository API, but for live capturing of data you should use the Tracker API. When using the Tracker API, as far as I am aware, the only way to get data into MongoDB is to flush the session.
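For context, the session lifetime driving all of this is the standard ASP.NET one from web.config; with stock InProc session state, Session_End fires roughly timeout minutes after the last request, as well as on Session.Abandon() (a sketch with placeholder values):

    <!-- web.config: the session expires after 20 idle minutes; with InProc,
         Session_End fires on expiry and on Session.Abandon(). -->
    <sessionState mode="InProc" timeout="20" />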
If you need to flush the data to xDB for testing purposes, then Session.Abandon() will work. I have a module here which you can use for creating contacts and then flushing the session, so you can check how reliable the session abandon is by looking in MongoDB:
https://marketplace.sitecore.net/en/Modules/X/xDB_Contact_Creator.aspx
We have come across two ways to do cache breaking for our CSS files.
Cache breaker passed as a query parameter:
http://your1337site.com/styles/cool.css?v=123
Cache breaker as part of the name:
http://your1337site.com/styles/123.cool.css
Which way is better? And why?
I feel that the second way is more verbose, because the file name matches a name in the folder structure, whereas the first way is good if you want to reference "cool.css" from other parts of the site that don't have access to the unique name you generate each time.
Steve Souders's article Revving Filenames: don't use querystring makes a good argument for changing the filename as the better of the two.
...a co-worker, Jacob Hoffman-Andrews, mentioned that Squid, a popular proxy, doesn’t cache resources with a querystring. This hurts performance when multiple users behind a proxy cache request the same file – rather than using the cached version everybody would have to send a request to the origin server.
As an aside, Squid 2.7 and above does cache dynamic content with the default configuration.
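If you do go with the filename scheme, one way to generate the revved name is to derive it from a hash of the file's contents at build time, so it only changes when the CSS actually changes. A minimal Node/TypeScript sketch (the paths are made up for illustration):

    import { createHash } from "node:crypto";
    import { readFileSync, copyFileSync } from "node:fs";

    // Hash the stylesheet's contents so the name changes only when the CSS does.
    const source = "styles/cool.css";
    const digest = createHash("md5").update(readFileSync(source)).digest("hex").slice(0, 8);

    // Produces e.g. styles/3f2a9c1b.cool.css for pages to reference.
    copyFileSync(source, `styles/${digest}.cool.css`);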
I have a forum which sends out many automatic mails, and I would like to gather those mails somewhere to have an overview. How can I add a BCC to all outgoing mails, excluding some mail subjects, or excluding the early hours when especially many mails are sent?
There is an always_bcc parameter in Postfix you could set for that. Or modify the configuration of your forum to add yourself as a Bcc: recipient on these mails. Be aware of the privacy issues of such a configuration, though (IANAL, of course). Check the documentation on the Postfix site.
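In main.cf that is a one-liner (the archive address is a placeholder); note that always_bcc by itself cannot exclude anything, so any filtering has to happen on the receiving side:

    always_bcc = archive@example.com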
If your forum always sends out mail as a specific sender, there is sender_bcc_maps.
sender_bcc_maps = hash:/etc/postfix/bcc_senders
where /etc/postfix/bcc_senders contains:
forum@example.com your_bcc_recipient@example.com
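After editing the map, rebuild the hash file and reload Postfix so the change is picked up:

    postmap /etc/postfix/bcc_senders
    postfix reload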
Then you can add filters on the recipient side to exclude the mail you want to ignore.
This assumes your forum software is running on a server that is either delivering its mail with Postfix directly, or relaying its mail through another host that you control which runs Postfix (presumptions made from the post being tagged with Postfix).
Two partial solutions have already been given for getting a blind carbon copy: Keltia's always_bcc and Devdas's sender_bcc_maps.
Alternatively, you could modify the forum software itself to send an extra copy to your designated BCC email account.
To provide the filtering ("excluding some mail-subjects or early hours when especially many mails are sent"):
If your BCC destination is a Unix account, then you can use a .forward file to hand the message to another script for processing. Procmail has some filtering capabilities that might be helpful here.
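A minimal sketch (the subject pattern is a placeholder): on many systems a one-line .forward of "|/usr/bin/procmail" hands each message to procmail, and a ~/.procmailrc like this discards the noisy mail while everything else falls through to the normal mailbox:

    :0
    * ^Subject:.*daily digest
    /dev/null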
Or, again, update the forum software's logic so that it only adds the BCC when you want one.