I use SimplePie to merge RSS feeds. I have two sections on this page that do the same thing with different feeds, and I sort the merged items. Here is my code:
$feeds=array(
'http://feeds.feedburner.com/Belvederegasse',
'http://diealternative.org/zeitschrift/feed/rss/',
'http://diealternative.org/arbeitsklima/feed/rss/',
'http://feeds.feedburner.com/Arbeitszeit',
'http://feeds.feedburner.com/AugeFinance',
'http://diealternative.org/nulllohnrunden/feed/',
'http://diealternative.org/bulletin/feed/rss/',
'http://feeds.feedburner.com/verteilungsgerechtigkeit',
'http://feeds.feedburner.com/hochschule',
'http://feeds.feedburner.com/Sozialmilliarde'
);
$first_items = array();
foreach ($feeds as $url)
{
    $feed = new SimplePie();
    $feed->set_stupidly_fast(true);
    $feed->enable_order_by_date(true);
    $feed->enable_cache(true);
    $feed->set_feed_url($url);
    $feed->init();
    $items_per_feed = 5;
    for ($x = 0; $x < $feed->get_item_quantity($items_per_feed); $x++) {
        $first_items[] = $feed->get_item($x);
    }
    unset($feed);
}
function sort_items($a, $b){
    return SimplePie::sort_items($a, $b);
}
usort($first_items, "sort_items");
$itemlimit = 0;
foreach ($first_items as $item):
    if ($itemlimit == 8) { break; }
    //HTML output
    $itemlimit = $itemlimit + 1;
endforeach;
?>
When you visit the page http://diealternative.org it takes more than 20 seconds to load. That's no surprise, because there are a lot of feeds to parse, but what about the caching? Once loaded, it should not reconnect to the feeds; it should use the cache.
Why does it take so long to load every time?
A couple of things: First, by default SimplePie only caches for 60 minutes, I think - I set mine to 24 hours so that it keeps the cache for the whole day. The first load is slow, but it speeds up after that. Also, the set_stupidly_fast function may disable caching or other settings.
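If you do keep set_stupidly_fast(), a minimal sketch of re-asserting the cache settings afterwards, just in case (whether it actually turns caching off may depend on your SimplePie version):

$feed = new SimplePie();
$feed->set_stupidly_fast(true);   // apply the speed shortcuts first...
$feed->enable_cache(true);        // ...then re-assert caching, in case it was switched off
$feed->set_cache_duration(86400); // keep cached feeds for 24 hours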
Secondly, SimplePie will merge the feeds for you and sort them by date; you don't need a nested loop like that. Try this:
$max_items_per_feed = 5; // this pulls the top 5 articles from each feed
$max_items_total = 50; // this caps the total articles
$feed = new SimplePie();
$feed->set_feed_url($feeds);
// limit the number of items
$feed->set_item_limit($max_items_per_feed);
$feed->enable_cache(true);
$feed->set_cache_duration(86400); // refresh cache once a day - 24 hrs
// Run SimplePie.
$success = $feed->init();
// This makes sure that the content is sent to the browser as text/html and the UTF-8 character set (since we didn't change it).
$feed->handle_content_type();
foreach ($feed->get_items(0, $max_items_total) as $key=>$item) {
...
}
What worked for me is to use SimplePie to generate separate HTML files. I save these to the server and include them on the main page, and I keep them updated with cron jobs.
The loading time of the website is now good (up to 150% faster!).
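A minimal sketch of such a cron-driven generator script, assuming a hypothetical build-feeds.php and an output path /path/to/cache/feeds.html that the main page then include()s:

<?php
// build-feeds.php - run from cron, e.g. every 30 minutes
require_once 'simplepie/autoloader.php'; // adjust to your SimplePie install

$feeds = array('http://feeds.feedburner.com/Belvederegasse' /* , ...the rest of your feed URLs */);

$feed = new SimplePie();
$feed->set_feed_url($feeds);
$feed->enable_cache(true);
$feed->init();

// render the merged items to a static HTML fragment
$html = '<ul>';
foreach ($feed->get_items(0, 8) as $item) {
    $html .= '<li><a href="' . $item->get_permalink() . '">'
           . htmlspecialchars($item->get_title()) . '</a></li>';
}
$html .= '</ul>';

// the main page just does: include '/path/to/cache/feeds.html';
file_put_contents('/path/to/cache/feeds.html', $html);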
tom
I need to perform OCR analysis on an image for a university project.
I am required to use PHP; unfortunately, the Google Cloud Vision documentation has few code samples using PHP...
I managed to perform OCR on one image at a time, but 80% of the time I have a lot of images (around 20) to process at once.
So I tried to use BatchRequest to minimize the API calls, as specified here, but I can't find out how to build the $requests array they put at the top.
By the way, I tried other APIs like Tesseract, but the recognition is not accurate enough to use.
If you only want to perform a batch request, you can just use batchAnnotateImages with ImageAnnotatorClient. Below you can find a sample using it, as well as a way to create the request variable. I also include an asyncBatchAnnotateImages sample below, but I recommend the one I mentioned first.
Using ImageAnnotatorClient with batchAnnotateImages
<?php
require '../vendor/autoload.php';
use Google\Cloud\Vision\V1\AnnotateImageRequest;
use Google\Cloud\Vision\V1\Feature;
use Google\Cloud\Vision\V1\Feature_Type;
use Google\Cloud\Vision\V1\Image;
use Google\Cloud\Vision\V1\ImageAnnotatorClient;
use Google\Cloud\Vision\V1\Likelihood;
$client = new ImageAnnotatorClient();
try {
    $feature = (new Feature())
        ->setType(Feature_Type::FACE_DETECTION);
    $image = (new Image())
        ->setContent(file_get_contents("../images/family.jpg"));
    $request = (new AnnotateImageRequest())
        ->setImage($image)
        ->setFeatures([$feature]);
    $requests = [$request];
    # note: you can add as many requests as you want to perform, i.e. [$request, $request2, ...]
    $results = $client->batchAnnotateImages($requests);
    foreach ($results->getResponses() as $result) {
        foreach ($result->getFaceAnnotations() as $faceAnnotation) {
            $likelihood = Likelihood::name($faceAnnotation->getJoyLikelihood());
            echo "Likelihood of joy: $likelihood" . PHP_EOL;
        }
    }
} finally {
    $client->close();
}
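Since you mention around 20 images at once, here is a minimal sketch of building the $requests array in a loop; the glob() path, the chunk size, and the choice of DOCUMENT_TEXT_DETECTION for OCR are illustrative assumptions, and you should check the API's current per-request image limit:

<?php
require '../vendor/autoload.php';
use Google\Cloud\Vision\V1\AnnotateImageRequest;
use Google\Cloud\Vision\V1\Feature;
use Google\Cloud\Vision\V1\Feature_Type;
use Google\Cloud\Vision\V1\Image;
use Google\Cloud\Vision\V1\ImageAnnotatorClient;

$client = new ImageAnnotatorClient();
try {
    // OCR feature; TEXT_DETECTION also works for simpler images
    $feature = (new Feature())->setType(Feature_Type::DOCUMENT_TEXT_DETECTION);
    // build one AnnotateImageRequest per image file (the path is an assumption)
    $requests = [];
    foreach (glob('../images/*.jpg') as $path) {
        $image = (new Image())->setContent(file_get_contents($path));
        $requests[] = (new AnnotateImageRequest())
            ->setImage($image)
            ->setFeatures([$feature]);
    }
    // send in chunks to stay under the per-request image limit
    foreach (array_chunk($requests, 16) as $chunk) {
        $results = $client->batchAnnotateImages($chunk);
        foreach ($results->getResponses() as $result) {
            $annotation = $result->getFullTextAnnotation();
            echo ($annotation ? $annotation->getText() : '') . PHP_EOL;
        }
    }
} finally {
    $client->close();
}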
Using ImageAnnotatorClient with asyncBatchAnnotateImages
<?php
require '../vendor/autoload.php';
use Google\Cloud\Vision\V1\AnnotateImageRequest;
use Google\Cloud\Vision\V1\Feature;
use Google\Cloud\Vision\V1\Feature_Type;
use Google\Cloud\Vision\V1\GcsDestination;
use Google\Cloud\Vision\V1\Image;
use Google\Cloud\Vision\V1\ImageAnnotatorClient;
use Google\Cloud\Vision\V1\ImageSource;
use Google\Cloud\Vision\V1\OutputConfig;
$client = new ImageAnnotatorClient();
try {
    $feature = (new Feature())
        ->setType(Feature_Type::FACE_DETECTION);
    $gcsImageUri = 'gs://<YOUR BUCKET ID>/<YOUR IMAGE FILE>';
    $source = new ImageSource();
    $source->setImageUri($gcsImageUri);
    $image = (new Image())
        ->setSource($source);
    $request = (new AnnotateImageRequest())
        ->setImage($image)
        ->setFeatures([$feature]);
    $requests = [$request];
    $gcsDestination = (new GcsDestination())
        ->setUri("gs://<YOUR BUCKET>/<OUTPUT FOLDER>/");
    $outputConfig = (new OutputConfig())
        ->setGcsDestination($gcsDestination);
    $operationResponse = $client->asyncBatchAnnotateImages($requests, $outputConfig);
    $operationResponse->pollUntilComplete();
    if ($operationResponse->operationSucceeded()) {
        $result = $operationResponse->getResult();
        var_dump($result);
        # the output folder will contain your file processing results
    }
} finally {
    $client->close();
}
Note: To add to this, you can also check an official implementation of a similar case using the Vision client at this link, but it's a sample for detecting text in a PDF file.
You can also find additional information on these links:
ImageAnnotatorClient
AnnotateImageRequest
AsyncBatchAnnotateImages
BatchAnnotateImagesResponse
AsyncBatchAnnotateImagesResponse
PHP Cloud Vision GitHub Project Page
I am working on a WordPress plugin. I want to create a PDF file, and for this I used FPDF.
Using the code below, I am able to generate a PDF and save it to the server:
require('fpdf/html_table.php');
$pdf = new PDF_HTML();
$questions = $_POST['question'];
$count = count($questions);
$quests = "";
$pdf->SetFont('times', '', 12);
$pdf->SetTextColor(50, 60, 100);
$pdf->AddPage('P');
$pdf->SetDisplayMode('real', 'default');
$pdf->SetFontSize(12);
for ($i = 1; $i <= $count; $i++) {
    $qus_id = $questions[$i - 1];
    $get_q = $wpdb->get_results(
        $wpdb->prepare("SELECT * FROM `SelleXam_question` WHERE `id` = %d", $qus_id)
    );
    $questf = "Quest $i : " . $get_q[0]->question;
    $pdf->Cell(0, 10, $questf . "\n");
    $pdf->Ln();
}
$dir = 'C:/wamp/www/';
$filename = "filename.pdf";
$pdf->Output($dir . $filename);
echo "Save PDF in folder";
But when it has saved and displayed the message "Save PDF in folder", I am unable to see the header part of the WordPress website.
Or when I use
$pdf->Output($filename,'D');
is there any way that I can show the file as a link?
When you are developing for WordPress, you can't just echo text at any time. WordPress goes through a whole series of actions to progressively generate the output rendered to the browser. If you generate output at an inappropriate time, you'll break WordPress's ability to generate its output at the right time.
If I were developing this, I'd have this function called as a filter on the_content. Your output code would change to something like this:
function append_pdf_to_content($content) {
    // Your existing PDF-generation code goes here
    $pdf->Output($filename, 'F');
    $pdf_link = "<br><a href='$filename' download>Download PDF</a>";
    return $content . $pdf_link;
}
add_filter('the_content', 'append_pdf_to_content');
If you want to use the D option, you'll need to link your users to a separate PHP page, using target='_blank', that triggers the download. Otherwise the PDF's headers will override any headers WordPress is trying to send.
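A minimal sketch of such a standalone page (the download-pdf.php name and the stored file path are assumptions):

<?php
// download-pdf.php - linked from the post with target='_blank'
$file = 'C:/wamp/www/filename.pdf'; // wherever your plugin saved the PDF

// send the PDF headers ourselves, outside of WordPress's output flow
header('Content-Type: application/pdf');
header('Content-Disposition: attachment; filename="filename.pdf"');
header('Content-Length: ' . filesize($file));
readfile($file);
exit;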
BTW, you might also want to take a look at mPDF; it's a fork of FPDF and remains in active development.
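A minimal mPDF sketch, assuming mPDF 7+ installed via Composer (the HTML content is illustrative):

<?php
require 'vendor/autoload.php';

// mPDF accepts HTML directly, unlike FPDF's cell-based API
$mpdf = new \Mpdf\Mpdf();
$mpdf->WriteHTML('<h1>Quest 1</h1><p>Question text goes here</p>');
$mpdf->Output('filename.pdf', 'F'); // 'F' saves to a file, 'D' forces a download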
I have a very simple form created with Gravity Forms.
It submits two numbers and then redirects to a different result page.
How do I retrieve those two numbers on the result page?
add_filter("gform_confirmation_4", "custom_confirmation", 3, 4 );
function custom_confirmation($confirmation, $form, $lead, $ajax)
This gives a custom confirmation. Each field value can be retrieved using $lead[{field ID}].
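A minimal sketch of what that callback might look like, assuming the two numbers are fields 1 and 2 and the result page lives at /result/ (both are assumptions; check the field IDs in your form editor):

function custom_confirmation($confirmation, $form, $lead, $ajax) {
    // field IDs 1 and 2 are assumptions - look them up in the form editor
    $url = '/result/?' . http_build_query(array(
        'num1' => $lead[1],
        'num2' => $lead[2],
    ));
    // returning array('redirect' => ...) makes Gravity Forms redirect instead of rendering text
    return array('redirect' => $url);
}

On the result page, read the values back out of $_GET['num1'] and $_GET['num2'].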
I have a solution for this based on a combination of form submission hooks and the GForms API. It's a horrible plugin, so I apologise for the messiness of the logic flow. It's important to use the framework methods rather than processing the data yourself, since there are a good number of hacks and shonky things going on in there to correctly match field IDs and so forth.
I will provide a solution to pass a submission from one form to pre-populate another. Changing the destination for POST data is pretty straightforward; they have an example of it on their gform_form_tag hook documentation page. Yes, that really is the only way of doing it.
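For reference, a sketch along the lines of their documented example (the form ID and handler URL are illustrative):

add_filter('gform_form_tag', 'change_form_action', 10, 2);
function change_form_action($form_tag, $form) {
    if ($form['id'] != 3) {
        return $form_tag; // only touch the form you care about
    }
    // rewrite the <form> tag's action so the POST goes to a custom handler
    return preg_replace("|action='(.*?)'|", "action='/custom-handler.php'", $form_tag);
}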
Without further ado here is the code. I've set it up to work off form configuration to make things simpler for the end user, so it works like this:
Select "allow field to be populated dynamically" in your destination form field's advanced settings and choose a parameter name for each.
Add matching CSS classes on the source fields of the other form(s) to setup the associations.
Add a CSS class to the source forms themselves so that we can quickly check if the redirection is necessary.
$class = 'GForms_Redirector';
add_filter('gform_pre_submission', array($class, 'checkForSubmissionRedirection'), 10, 1);
add_filter('gform_confirmation', array($class, 'performSubmissionRedirection'), 10, 4);
abstract class GForms_Redirector
{
    const SOURCE_FORMS_CLASS_MATCH = 'submission-redirect';
    const DEST_PAGE_SLUG = 'submit-page-slug';
    const DEST_FORM_ID = 1;

    protected static $submissionRedirectUrl;

    // first, read sent data and generate redirection URL
    public static function checkForSubmissionRedirection($form)
    {
        if (preg_match('#\W' . self::SOURCE_FORMS_CLASS_MATCH . '\W#', $form['cssClass'])) {
            // load page for base redirect URL
            $destPage = get_page_by_path(self::DEST_PAGE_SLUG);
            // load form for reading destination form config
            $destForm = RGFormsModel::get_form_meta(self::DEST_FORM_ID, true);
            $destForm = RGFormsModel::add_default_properties($destForm);
            // generate submission data for this form (there seem to be no hooks before gform_confirmation that allow access to this. DUMB.)
            $formData = GFFormsModel::create_lead($form);
            // create a querystring for the new form based on mapping dynamic population parameters to CSS class names in source form
            $queryVars = array();
            foreach ($destForm['fields'] as $destField) {
                if (empty($destField['inputName'])) {
                    continue;
                }
                foreach ($form['fields'] as $field) {
                    if (preg_match('#(\s|^)' . preg_quote($destField['inputName'], '#') . '(\s|$)#', $field['cssClass'])) {
                        $queryVars[$destField['inputName']] = $formData[$field['id']];
                        break;
                    }
                }
            }
            // set the redirect URL to be used later
            self::$submissionRedirectUrl = get_permalink($destPage) . "?" . http_build_query($queryVars);
        }
    }

    // when we get to the confirmation step we set the redirect URL to forward on to
    public static function performSubmissionRedirection($confirmation, $form, $entry, $is_ajax = false)
    {
        if (self::$submissionRedirectUrl) {
            return array('redirect' => self::$submissionRedirectUrl);
        }
        return $confirmation;
    }
}
If you wanted to pass the form values someplace else via the querystring then you'd merely need to cut out my code from the callback and build your own URL to redirect to.
This is a very old question; nowadays you can send the data using a query string in the confirmation settings.
They have the documentation on this link:
How to send data from a form using confirmations
Just follow the first step and it will be clear to you.
I've got a bit of a weird issue here. I recently started doing maintenance on a website that I did not originally build. The Drupal site has a bunch of nodes with an audio file field, and a jQuery player that plays them. On a lot of the nodes the player does not load, and I've realized this is because the file is reported as being 0 bytes when I edit the node. I'm not sure why this is. At first I thought it might be a file-permissions issue, but I don't think that's the case, as the permissions look fine to me. To fix a node, all I had to do was re-upload the file. The problem is that there are hundreds of these files, and I'd like to fix them all with a single update if that is possible.
Here is a working version of Terry's pseudocode for Drupal 7.
$fids = db_query('SELECT fid FROM {file_managed} WHERE filesize = 0')->fetchCol();
foreach (file_load_multiple($fids) as $file) {
    // Get full path to file.
    $target = drupal_realpath($file->uri);
    // If the file does not exist, try the next file.
    if (!file_exists($target)) {
        echo "File $file->uri does not exist." . PHP_EOL;
        continue;
    }
    // Find and store the size of the file.
    $file->filesize = filesize($target);
    // Only save if the size is now non-zero.
    if ($file->filesize > 0) {
        file_save($file);
    }
    else {
        echo "Size of $file->uri is still zero." . PHP_EOL;
    }
}
Run it with drush:
drush php-script fix-file-sizes.php
We had the same problem: a lot of 0 byte files in the database.
What we did looked something like this:
function update_my_filesizes() {
    $fids = db_query('SELECT fid FROM {file_managed} WHERE filesize = 0')->fetchCol();
    foreach (file_load_multiple($fids) as $file) {
        // determine the size of the file on disk
        $file->filesize = filesize(drupal_realpath($file->uri));
        if ($file->filesize > 0) {
            file_save($file);
        } else {
            echo "Filesize for file $file->uri is still 0 :(";
        }
    }
}
I have an ASP.NET application that 'simulates' real-time video. I do that by fetching multiple pictures from a MySQL database.
The problem is how to display them on the web page. I refresh the page once per picture per second, and the result is that the pictures are choppy and flickery.
Response.AppendHeader("Refresh", "1")
How can I make the refresh rate of the page four times per second? Or is there an implementation to display the pictures in a more convenient way?
I would really appreciate a reply. Good day (^_^)...
Here is the script that I am using to read the images from the database:
If dr.Read Then
    dr.Read()
    Response.ContentType = "image/jpeg" 'gets or sets the type of output stream
    Response.BinaryWrite(dr.Item("file")) 'writes a stream of binary characters to the http output stream
Else
    Response.Write("There is no current active webcast.")
End If
dr.Close()
Create a JavaScript method to change the image using an XMLHttpRequest object, and recursively set a timer:
function Timer() {
    // params: [0] = target img element id, [1] = image name suffix
    setTimeout("getImage(['imageContainer1', '1'])", 200);
    t = setTimeout("Timer()", 100);
}

function getImage(params) {
    var request = getXMLhttpObject();
    makeRequest("ajax/ObtainImage.aspx?name=myimage" + params[1] + ".jpg", request, imageResponseHandler(request, params));
}

function getXMLhttpObject() {
    return (navigator.appName == "Microsoft Internet Explorer") ? new ActiveXObject("Microsoft.XMLHTTP") : new XMLHttpRequest();
}

function makeRequest(url, request, Handler) {
    request.open("GET", url, true);
    request.onreadystatechange = Handler;
    request.send(null);
}

function imageResponseHandler(request, params) {
    return function() {
        if (request.readyState == 4)
            document.getElementById(params[0]).src = request.responseText;
    };
}
I would either use some JavaScript/Ajax to change the content, or a meta refresh (probably not the best for fast refreshing).
Maybe you should think about loading more than one picture onto the page and using JavaScript to cycle between them. Rather than refreshing the page, you could fetch the pictures using AJAX.
If you really want to simulate video, you need to be able to display at least 15 pictures each second (15fps). Making that many requests per second isn't a great idea.
If you absolutely must do this, I'd suggest "buffering" the pictures first, before displaying them, and if possible, fetching them in batches:
var buffer = [];     // cache loaded images
var bufferSize = 30; // load 30 images before playing

function loadImage(src) {
    var img = new Image();
    img.src = src;
    buffer.push(img);
}

function animate(target) {
    if (buffer.length > 0) {
        var img = buffer.shift();
        document.getElementById(target).src = img.src;
    }
}

function bufferImages() {
    for (var i = 0; i < bufferSize; i++) {
        loadImage('/ajax/ObtainImage.aspx?name=img');
    }
}

setInterval("animate('imgTarget')", 65);  // should give us about 15fps
setInterval("bufferImages()", 1000);      // every second
Add this to the head of your HTML document.
Not the most efficient way, but it will work.
<meta http-equiv="refresh" content=".25" />