I want to collect as much information as possible when someone visits a webpage, e.g.:
client's screen resolution: <script type='text/javascript'>document.write(screen.width+'x'+screen.height); </script>
referer: <?php print ($_SERVER['HTTP_REFERER']); ?>
client ip: <?php print ($_SERVER['REMOTE_ADDR']); ?>
user agent: <?php print ($_SERVER['HTTP_USER_AGENT']); ?>
what else is there?
Those are the basic pieces of information. Anything beyond that could be viewed as spyware-like, and privacy advocates will (justifiably) frown upon it.
The best way to obtain more information from your users is to ask them: make the fields optional, and inform your users of exactly what you will be using the information for. Will you be mailing them a newsletter?
If you plan to email them, then you MUST use the "confirmed opt-in" approach: get their consent first by having them respond to an email, keyed with a unique secret number, confirming that they grant you permission to send them that newsletter or whatever notifications you plan to send.
As long as you're up-front about how you plan to use the information, and give the users options to decide how you can use it (these options should all default to "you do NOT have permission"), you're likely to get more users who are willing to trust you and provide better quality information. For those who don't wish to reveal any personal information about themselves, don't waste your time trying to get it: many of them take steps to hide and prevent that anyway, and that is their right.
You can dump all the information PHP has about the client's request with this small snippet:
<?php
foreach ($_SERVER as $key => $value) {
    echo '$_SERVER["' . $key . '"] = ' . $value . "<br />";
}
?>
The list that is available to PHP is found here.
If you need more details than that, you might want to consider using Browserhawk.
To what end?
Remember that the client IP is close to meaningless now. All users coming from the same proxy or the same NAT point will have the same client IP. Years ago, all AOL traffic came from just a few proxies, though now actual AOL users may be outnumbered by the proxies :).
If you want to uniquely identify a user, it's easy to create a cookie in Apache (mod_usertrack) or in whatever framework you use. If a person blocks cookies, please respect that and don't resort to tricks to track them anyway. Or take the lesson of Google: make your site so useful that people will choose the utility over cookie worries.
Remember that JavaScript runs on the client. Your document.write() will show the info on their webpage; it does not send anything to your server. You'd want to use JavaScript to put this info in a cookie, or to store it with a form submission if you have any forms.
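For example, a one-line script on the page can stash the screen size in a cookie that PHP can read on the next request. A minimal sketch; the cookie name screen_res and the helper function are my own illustration:

```php
<?php
// On the page, JavaScript sets the cookie (this line runs in the browser):
//   document.cookie = 'screen_res=' + screen.width + 'x' + screen.height;
//
// On a later request, PHP reads it back. Validate the format, since any
// client can send arbitrary cookie values.
function get_screen_resolution(array $cookies) {
    if (isset($cookies['screen_res']) && preg_match('/^\d+x\d+$/', $cookies['screen_res'])) {
        return $cookies['screen_res'];
    }
    return null; // first visit (cookie not set yet) or a bogus value
}

echo get_screen_resolution($_COOKIE);
```

Note the cookie is only available from the second request onward, so the first page view has no resolution data.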
I like to use something like this:
$log = array(
    'ip' => $_SERVER['REMOTE_ADDR'],
    're' => $_SERVER['HTTP_REFERER'],
    'ag' => $_SERVER['HTTP_USER_AGENT'],
    'ts' => date("Y-m-d H:i:s"), // H (24-hour), not h, so timestamps are unambiguous
);
echo json_encode($log);
You can save that string in a file, the JSON is pretty small and is just one line.
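Appending one JSON object per line keeps the log trivial to parse later. A minimal sketch; the access.log filename is my choice:

```php
<?php
// Collect the request data (falling back to a placeholder outside a web request).
$log = array(
    'ip' => isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : 'cli',
    'ts' => date('Y-m-d H:i:s'),
);

// FILE_APPEND adds to the end of the file; LOCK_EX avoids interleaved
// half-lines when two requests write at the same time.
file_put_contents('access.log', json_encode($log) . PHP_EOL, FILE_APPEND | LOCK_EX);
```

Each line of the resulting file can then be fed back through json_decode() independently.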
phpinfo(INFO_VARIABLES); // the constant's value is 32
This prints a table with all of the predefined variables. You can simply copy and paste the variable names directly into your PHP code.
e.g.:
_SERVER["GEOIP_COUNTRY_CODE"] AT
would be in php code:
echo $_SERVER["GEOIP_COUNTRY_CODE"];
Get the values of all the common $_SERVER variables:
<?php
$test_HTTP_proxy_headers = array(
    'GATEWAY_INTERFACE', 'SERVER_ADDR', 'SERVER_NAME', 'SERVER_SOFTWARE',
    'SERVER_PROTOCOL', 'REQUEST_METHOD', 'REQUEST_TIME', 'REQUEST_TIME_FLOAT',
    'QUERY_STRING', 'DOCUMENT_ROOT', 'HTTP_ACCEPT', 'HTTP_ACCEPT_CHARSET',
    'HTTP_ACCEPT_ENCODING', 'HTTP_ACCEPT_LANGUAGE', 'HTTP_CONNECTION',
    'HTTP_HOST', 'HTTP_REFERER', 'HTTP_USER_AGENT', 'HTTPS', 'REMOTE_ADDR',
    'REMOTE_HOST', 'REMOTE_PORT', 'REMOTE_USER', 'REDIRECT_REMOTE_USER',
    'SCRIPT_FILENAME', 'SERVER_ADMIN', 'SERVER_PORT', 'SERVER_SIGNATURE',
    'PATH_TRANSLATED', 'SCRIPT_NAME', 'REQUEST_URI', 'PHP_AUTH_DIGEST',
    'PHP_AUTH_USER', 'PHP_AUTH_PW', 'AUTH_TYPE', 'PATH_INFO', 'ORIG_PATH_INFO',
    'GEOIP_COUNTRY_CODE'
);
foreach ($test_HTTP_proxy_headers as $header) {
    // Not every entry is set in every environment, so guard against notices.
    echo $header . ": " . (isset($_SERVER[$header]) ? $_SERVER[$header] : '') . "<br/>";
}
?>
When I paste my website's link into a social network (Facebook or Twitter, for example), the social network accesses my site to show a preview to the user.
I want to separate these accesses from real visits in my reports, but to do that I need to identify them.
Does this kind of access send any default information, common to every site, that lets me identify it as a robot rather than a real user?
You should be able to identify those robots from their user-agent string.
For example, Twitter uses the User-Agent of Twitterbot. And Facebook crawler identification is documented here.
You can use the User-Agent to do that.
if (strpos($_SERVER["HTTP_USER_AGENT"], "Twitterbot") !== false) {
    echo "TwitterBot";
} elseif (strpos($_SERVER["HTTP_USER_AGENT"], "facebookexternalhit") !== false) {
    echo "Facebook";
} else {
    echo "regular user";
}
https://developers.facebook.com/docs/sharing/webmasters/crawler
I've worked through the Twilio tutorials on sending and receiving SMS with WordPress. I integrated them into a test install I have and then merged them into one. (The receive one is pretty short, although it's not a full "receive" so much as a blind response.)
Then I came across themebound's twilio-core, so I had a look at that, and quickly got a fatal error because both plugins use the Twilio helper library. For testing, I just deactivated the first one and activated the second, which leads me to my first question:
Both of these plugins use the same library, and both load it with require_once from their own plugin folder. The original name of the library is twilio-php-master; one renames it twilio, the other twilio-php. Not that the names matter, since the copies are in separate locations: the fatal error is a result of the two copies defining the same function names.
How can I test for the existence of the other plugins helper library and use that in place of the one that I have installed?
What about the order in which WordPress loads the plugins? It matters if mine loads first... but I can't even get that far, because it crashes just from having both activated.
--- we're now leading into the next question ---
As a result, I'm probably going to go with the twilio-core version because it seems more featured and is available on github, even if the author isn't overly active (there's a pull request that's months old and no discussion about it).
I want to extend the functionality of one of the sending plugins (either one) with respect to the receipt of the message from Twilio. At the moment, both examples use classes for the sending, and the receive code I'm using does not; as such, I have just added it to the end of the main plugin file (the one with the plugin metadata). (A quick note: this is not the cause of the fatal error above; that happened before I started merging the send and receive.)
Because the receive is driven by the REST API and not initiated by a user action on the system (i.e. someone in the admin area accessing the class through the admin panel), I'm not sure whether it's appropriate that a) I put it there, and b) I use the send function inside the class when further processing the receipt. My end goal is to analyze the incoming message and forward it back out through Twilio or another application, or even just record it in WordPress itself.
Should I keep the receive functionality/plugin separate from the sending one?
And this leads on to the hardest question for me:
How would I extend either plugin to make the send function available to my receive plugin? (This is where part of my confusion comes from.) Because both plugins only operate in the admin area, and the REST API isn't an actual user operating on the front end, how can I call those functions from the admin area? Will it "just be available"? Do I have to replicate them on the public side? And if so, is it necessary to have them in the admin area as well?
Edit: with respect to one of the comments below, I have tested, and twl_send_sms is available once the helper is loaded. What I will do is determine a way to check whether the helper is loaded (a function_exists() test will probably suffice) and, depending on that, require or not require my own copy.
From the received message I am now able to craft a separate forward as a new message. But how can I pass the callback function the parameters of the initial inbound message? E.g., how do I populate $sms_in with the POST data?
function register_receive_message_route() {
    register_rest_route( 'sms/v1', '/receiver_sms', array(
        'methods'  => 'POST',
        'callback' => 'trigger_receive_sms',
    ) );
}

function trigger_receive_sms( $sms_in = '' ) {
    /* we have three things to do:
     * 1: send the reply to twilio,
     * 2: craft a reply,
     * 3: save the message data to the database
     */
    header( 'content-type: text/xml' );
    echo '<?xml version="1.0" encoding="UTF-8"?>';
    echo '<Response>';
    echo '  <Message>Thanks, someone will be in contact shortly.</Message>';
    echo '</Response>';

    $args = array(
        'number_to' => '+xxxxxxxxxxx',
        'message'   => "Test Worked!\n$sms_in",
    );
    twl_send_sms( $args );
}
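One thing worth noting: the route registration above only takes effect if it is hooked into rest_api_init. Assuming the functions live in a loaded WordPress plugin file, that is a single line:

```php
// Register the custom REST route when WordPress initializes the REST API.
add_action( 'rest_api_init', 'register_receive_message_route' );
```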
Twilio developer evangelist here.
It sounds to me as though your send functionality (whatever you can get out of twilio-core) is separate from your receive functionality. So I would likely split those up as two plugins and follow the instructions from the post you mentioned on how to write something that receives and responds to SMS messages. That way, twilio-core can use the Twilio library it bundles, and your receive plugin won't need a library at all (it just needs to respond with TwiML, which is just XML).
I'm not sure I understand the last part of question 4, though: you can't really interact with the receive endpoint of the plugin yourself, because all it will do is return XML to you. That's for Twilio to interact with.
Edit
In answer to your updated question, when your callback is triggered by a POST request it is passed a WP_REST_Request object. This object contains the POST body and the parameters can be accessed by array access.
function trigger_receive_sms( $request ) {
    echo $request['Body'];
}
In good news, if you just plan to send two messages then you can do so entirely with TwiML. You just need to use two <Message>s:
echo ('<?xml version="1.0" encoding="UTF-8"?>');
echo ('<Response>');
echo (' <Message>Thanks, someone will be in contact shortly.</Message>');
echo (' <Message to="YOUR_OTHER_NUMBER">It worked!</Message>');
echo ('</Response>');
This way you don't need to worry about that other library.
I want to implement a very basic RSS feed for a website that has an FAQ.
Subscribers will be informed about new questions/answers. That's great.
But questions will have essential changes in content from time to time (e.g. when a better answer for a question is found). Is RSS able to inform subscribers that such an answer has changed?
If not, what could be a good workaround? I'm thinking of offering another RSS feed which only announces changes to existing questions. Is this "the right way to go"?
The answer depends on what kind of client you want to implement your feed for.
If you are talking about simple RSS readers, there is no real standard for updating only one entry that has changed; most readers simply poll, effectively fetching the whole feed again to pick up the update.
But there are protocols called light and fat pinging that are designed specifically to handle what you want to do. The difference between the two is:
Light pinging means that the URL of the feed that changed is sent to the subscriber
Fat pinging means that the updated content of the feed that changed is sent to the subscriber
One pretty popular (fat-ping) protocol, backed by Google, is called PubSubHubbub. A few services use it already, such as Blogger, YouTube, MySpace, Tumblr and WordPress. They have open-source clients for a lot of languages available on their GitHub; if you are interested, I recommend taking a look at their wiki, which is pretty complete and informative.
Using a second feed for updates does not seem like a good idea: that is not what a feed is designed for, and it would mean that you have to implement a client for people to install in order to actually get the updates.
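For plain RSS readers there is still a common convention worth knowing: re-publish the changed item with the same <guid> but a fresh <pubDate>, so clients that key on the guid can treat it as an update rather than a new entry (how readers display this varies). A minimal sketch; the build_item() helper is my own:

```php
<?php
// Build an RSS <item> for an FAQ entry. The guid stays stable across edits;
// only pubDate (and possibly the title/body) change when the answer is updated.
function build_item($guid, $title, $updated_at) {
    return "<item>\n"
         . "  <guid isPermaLink=\"false\">" . htmlspecialchars($guid) . "</guid>\n"
         . "  <title>" . htmlspecialchars($title) . "</title>\n"
         . "  <pubDate>" . date(DATE_RSS, $updated_at) . "</pubDate>\n"
         . "</item>\n";
}

echo build_item('faq-42', 'How do I reset my password?', time());
```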
I would do something with edit or update dates, titles, and other XML node values: compare the old versions with the new versions to see whether a node has changed to another value.
If you have a variable for your XML nodes that can change to another value, you can check for a change with an if statement. In this example I'll check the value of $title.
<?php
$title = 'My awesome title';

/* Here goes your PHP code to get the content and assign a variable to each node */
$title_old = $title; /* the value from your XML script or cached XML data */
$title_new = 'My awesome title'; /* use YOUR new value: request the content again after someone clicks the save button, or use a crontab to check for changes every x minutes, hours or days */

if ($title_new == $title_old) {
    $title_changed = 'No changes detected to the title node';
    echo $title_changed . ': The title is "' . $title . '"';
} else {
    $title_changed = 'The title node has been changed';
    echo $title_changed . ': The new title is "' . $title_new . '"';
}
?>
I hope this points you in the right direction.
You could format the links in such a way that they include a hash or an ID of the last answer. This way the links will get picked up as fresh, and if you have a custom feed reader it can track whether the link is new or has already been read by the user.
We are working on proxy-based protection software. It catches the user's HTTP request, does the proxy work, catches the HTTP response, modifies its content, and sends it back to the original user.
We have made two attempts so far:
A Squid proxy with a PHP layer wrapped around Squid.
It was promising, but in the PHP stream we did not know the length of the response data to expect, so it timed out every time => SLOW.
Now we have written a .NET application. It does everything we need, and it's pretty fast as long as it does not modify the content. If we need to gzip/gunzip, or just modify the content, it becomes very slow.
Could you help us?
We have been working on this project for almost a year at our university in Hungary. We wrote an automatic, self-learning, fully semantic analyzer engine, which can analyze and interpret any language and can detect and screen the target content. We also built image recognition software, which can detect the target object with 90% confidence in any image.
So everything is ready, but our proxy application is stuck.
We could also pay for this job if anybody would write it.
I spend a lot of time programming in PHP. Yes, as an interpreted language it can be slow, and there is a huge amount of badly written code out there, but even before you start to touch the code, tuning the environment can reduce execution time by a factor of 5-10. Changing the code can make it faster still; the biggest wins come from good choices of architecture and data structures (which is true of any language, not just PHP).
I don't know where you're starting from, but I find it surprising that you are not able to process the stream in time relative to how long it takes to generate the content and send it across the network. For it to be timing out, something is very wrong. (You're not trying to parse the HTML using one of the XML parsers, are you?) The length of the content should have little impact on the performance of the script unless you are trying to map it all into PHP's address space at the same time.
However AFAIK, it's not possible to implement a content filter directly into Squid using PHP (if you did, I'd love to know how you did it, also if you've implemented ICAP, that's very interesting). I'm guessing that you are using a URL redirector to route the requests via a proxy script written in PHP.
It is possible to write an eCAP module in C/C++.
Image recognition and natural language processing are not trivial exercises in programming - so you must have some good programmers working on your team. Really addressing your problem goes rather beyond the scope of a stack overflow answer, and touting for contractors is definitely off topic.
Thanks for your reply!
First of all: our PHP is pretty fast; the fsockopen part is slow, because it cannot know when to close the response connection from Squid.
Here is our code:
$buffer = socket_read($client, 4096);

if ( !($handle = fsockopen(HOST, SQUIDPROXYPORT, $errno, $error, 1)) ) {
    Log::write($this->log, 'Errno: ' . $errno . ' Error: ' . $error . "\n" . $buffer);
    exit('Failed to connect! ' . $errno . ':' . $error);
}

stream_set_timeout($handle, 0, 100000);
fwrite($handle, $buffer);

$result = '';
do {
    $tmp = fgets($handle, 1024);
    if ( $tmp ) {
        $result .= $tmp;
    }
} while ( !feof($handle) && $tmp != false );

socket_write($client, $result, strlen($result));
fclose($handle);
socket_close($client);
Again, how it works:
The client sends an HTTP request to us
Our PHP gets the request and sends its header to the Squid proxy
Squid does its work and sends the response data back to our PHP
Our PHP reads the response data from Squid via fsockopen
We analyze or modify the response data
We send it back to the client
BUT:
While we are waiting for the response data we do receive it, but we cannot tell at what point to close the connection between our PHP and Squid. This results in slow operation, and a timeout almost every time.
If you have any idea, please share it with us!
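For what it's worth, one common way out of this is to read the response headers first and then honour the Content-Length header, so the read loop knows exactly how many body bytes to expect instead of waiting for a timeout. A sketch under assumptions: parse_content_length() is my own helper name, and chunked transfer encoding would need extra handling:

```php
<?php
// Extract the Content-Length value from a raw HTTP header block.
// Returns null when the header is absent (e.g. chunked or close-delimited bodies).
function parse_content_length($raw_headers) {
    if (preg_match('/^Content-Length:\s*(\d+)/mi', $raw_headers, $m)) {
        return (int) $m[1];
    }
    return null;
}

// With the length known, the read loop from the code above can stop exactly
// at the end of the body instead of relying on feof() plus a timeout:
//
//   $remaining = parse_content_length($raw_headers);
//   while ($remaining > 0 && ($chunk = fread($handle, min(8192, $remaining))) !== false) {
//       $result    .= $chunk;
//       $remaining -= strlen($chunk);
//   }
```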
I need to send an e-mail from my custom php code using the Drupal service.
In other words, I've noticed Drupal can easily send emails, so I would like to know how I can use Drupal's own libraries to send them instead of using external libraries.
You need to use drupal_mail(), and define hook_mail() in your module.
You'll then call the first one, and the e-mail information will be set by the second one.
(See the example on the documentation page for drupal_mail().)
$message = array(
    'to' => 'example@mailinator.com',
    'subject' => t('Example subject'),
    'body' => t('Example body'),
    'headers' => array('From' => 'example@mailinator.com'),
);
drupal_mail_send($message);
Caveats:
Because drupal_mail() isn't called, other modules will not be able to hook_mail_alter() your output, which can cause unexpected results.
drupal_mail_send() is ignorant about which language to send the message in, so this needs to be figured out beforehand.
You'll have to manually specify any other e-mail headers that are required ('Content-Type', etc.). These are normally taken care of for you by drupal_mail().
In the case where your module sends several different types of emails, and you want those email templates to be editable (for example, user module's various registration notification/password reset/etc. e-mails), using hook_mail() is still the best way to go.
This is what is reported in a comment in the documentation for drupal_mail(). If the caveats are not important in your case, then you can use the snippet above.
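For completeness, the full drupal_mail() route looks roughly like this. A sketch assuming Drupal 7; the module name mymodule and the 'notice' key are my own choices:

```php
<?php
// In mymodule.module: hook_mail() fills in the message for a given $key.
function mymodule_mail($key, &$message, $params) {
  switch ($key) {
    case 'notice':
      $message['subject'] = t('Example subject');
      $message['body'][] = $params['body'];
      break;
  }
}

// Elsewhere in your code: drupal_mail() invokes mymodule_mail() above,
// handles headers and language, and lets other modules apply hook_mail_alter().
drupal_mail('mymodule', 'notice', 'example@mailinator.com', language_default(),
  array('body' => t('Example body')));
```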
I would recommend using mimemail. It's a contrib module that allows you to send HTML mails (+ attachments) or plaintext-only messages with ease. Here is an excerpt from the readme file:
The mimemail() function takes:
$sender - a user object or email address
$recipient - a user object or email address
$subject - subject line
$body - body text in HTML format
$plaintext - boolean, whether to send the message as plaintext-only (default false)
This module creates a user preference for receiving plaintext-only messages. This preference will be honored by all calls to mimemail().
Link: http://drupal.org/project/mimemail
I hope this will help you!