I am working on a PHP project to send push notifications. I have signed up and obtained an API key at https://console.firebase.google.com, and I have also taken the following code from a reference site.
/*
Parameter Example
$data = array('post_id'=>'12345','post_title'=>'A Blog post');
$target = 'single token id or topic name';
or
$target = array('token1','token2','...'); // up to 1000 in one request
*/
function sendMessage($data, $target){
    //FCM api URL
    $url = 'https://fcm.googleapis.com/fcm/send';
    //api_key available in Firebase Console -> Project Settings -> CLOUD MESSAGING -> Server key
    $server_key = 'AIzaSyBY5ZTQiCrFY6Syq9oymtlJODcwvkGyxmI';
    $fields = array();
    $fields['data'] = $data;
    if(is_array($target)){
        $fields['registration_ids'] = $target;
    }else{
        $fields['to'] = $target;
    }
    //header with content type and api key
    $headers = array(
        'Content-Type:application/json',
        'Authorization:key='.$server_key
    );
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($fields));
    $result = curl_exec($ch);
    if ($result === FALSE) {
        die('FCM Send Error: ' . curl_error($ch));
    }
    curl_close($ch);
    return $result;
}
But my query is: how can I use it? I mean, how can I get a target/token to test? Can I get the required token without Android? Is there a console where I can get a test token for a specific mobile device? I want to test the above code.
In the application you will have a class that extends FirebaseInstanceIdService.
Override the following method and run the code:
private static final String TAG = "MyFirebaseIIDService";

@Override
public void onTokenRefresh() {
    //Getting registration token
    String refreshedToken = FirebaseInstanceId.getInstance().getToken();
    //Displaying token on logcat
    Log.d(TAG, "Refreshed token: " + refreshedToken);
}
It displays the token in the Android Studio Logcat console.
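For completeness, once a token has been copied from Logcat, the sendMessage() function from the question could be exercised like this; the token string and payload here are placeholders, not real values:

// Hypothetical payload and a device token pasted from Logcat
$data = array('post_id' => '12345', 'post_title' => 'A Blog post');
$target = '<registration token copied from Logcat>';
//$target = array('token1', 'token2'); // or several tokens, up to 1000 per request

$response = sendMessage($data, $target);
echo $response; // raw JSON reply from FCM with success/failure information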
I am virtually at a brick wall with Symfony. I have a folder at /../compiled/ containing some minified files, and the controller is configured to read files from getRootDir() + /../compiled/file.min.css.
However, it throws a 'file not found' exception when file_get_contents() is called, even though the file actually exists.
I just don't know what is wrong; it is such a cryptic problem.
Code as requested:
public function atpInitAction(Request $request) // Validates the origin.
{
    $content = null;
    $_file = $request->query->get('file');
    if ($_SERVER["SERVER_NAME"] == atpHandler::getDomain())
    {
        // I AM FROM THE ORIGIN...
        $webfolder = $this->get('kernel')->getRootDir() . '/../compiled';
        $_file = $webfolder."/".$_file;
    }
    // What's my mime?
    $_mime = 'text/plain';
    if ($_file[strlen($_file)-2] == 'j') { $_mime = 'text/javascript'; }
    else { $_mime = 'text/css'; }
    $response = new Response();
    $response->headers->set('Content-Type', $_mime);
    $response->headers->set('Content-Disposition', 'filename="'.basename($_file) . '";');
    $response->headers->set('Content-Length', filesize($_file));
    $response->setContent(file_get_contents($_file));
    $response->sendHeaders();
    return $response;
}
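Not the answer itself, but one way to narrow this down is to dump the path the controller actually ends up with before reading it; a minimal sketch against the code above, assuming the controller extends Symfony's base Controller:

// Temporary sanity check inside atpInitAction(), before building the Response
$resolved = realpath($_file); // FALSE if the path does not resolve on disk
if ($resolved === false || !is_readable($resolved)) {
    throw $this->createNotFoundException(sprintf('Cannot read "%s" (resolved: %s)', $_file, var_export($resolved, true)));
}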
For example, I would just like to capture the data for the 30 latest events for the scrolling info shown on this URL:
http://hazmat.globalincidentmap.com/home.php#
Any idea how to capture it?
What language are you using? In Java, you can get the page HTML content using something like this:
URL url;
InputStream is = null;
BufferedReader br;
String line;
try {
    url = new URL("http://hazmat.globalincidentmap.com/home.php");
    is = url.openStream(); // throws an IOException
    br = new BufferedReader(new InputStreamReader(is));
    while ((line = br.readLine()) != null) {
        // Here you need to parse the HTML lines until
        // you find something you want, like for example
        // "eventdetail.php?ID", and then read the content of
        // the <td> tag or whatever you want to do.
    }
} catch (MalformedURLException mue) {
    mue.printStackTrace();
} catch (IOException ioe) {
    ioe.printStackTrace();
} finally {
    try {
        if (is != null) is.close();
    } catch (IOException ioe) {
    }
}
Example in PHP:
$c = curl_init('http://hazmat.globalincidentmap.com/home.php');
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
$html = curl_exec($c);
if (curl_error($c))
    die(curl_error($c));
$status = curl_getinfo($c, CURLINFO_HTTP_CODE);
curl_close($c);
Then parse the contents of the $html variable.
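As a rough sketch of that parsing step, assuming the events are linked through eventdetail.php as in the Java comments above (the XPath is a guess and has not been checked against the live page):

$dom = new DOMDocument();
@$dom->loadHTML($html);                // @ hides warnings from messy real-world markup
$xpath = new DOMXPath($dom);
// Anchors that appear to point at event detail pages (assumed page structure)
$nodes = $xpath->query('//a[contains(@href, "eventdetail.php")]');
$events = array();
foreach ($nodes as $node) {
    $events[] = trim($node->textContent);
    if (count($events) >= 30) {
        break;                         // only the 30 latest events are needed
    }
}
print_r($events);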
I am a total noob at Laravel.
public function stageFiles($user, $project) {
    try
    {
        $path = Input::get("item");
    }
    catch (Exception $e)
    {
        $exceptionMessage = $e->getMessage();
        $responseText = json_encode($exceptionMessage);
    }
}
What I want to do is return the response text and a 500 status code. How can I do this?
This is how you send a 500:
public function stageFiles($user, $project) {
    try
    {
        $path = Input::get("item");
    }
    catch (Exception $e)
    {
        $exceptionMessage = $e->getMessage();
        $responseText = json_encode($exceptionMessage);
        return Response::make($responseText, 500);
    }
}
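If the caller expects JSON, a variation on the same catch block could let Laravel set the Content-Type header as well; this assumes the same Laravel 4-style facades used above:

public function stageFiles($user, $project) {
    try
    {
        $path = Input::get("item");
    }
    catch (Exception $e)
    {
        // JSON body plus a 500 status; Response::json() sets the Content-Type header
        return Response::json(array('error' => $e->getMessage()), 500);
    }
}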
I need to figure out how to scrape and download files from an authenticated website.
A script needs to:
login to this website using a username/password
navigate through the pages to get to the download page
set some fields in the form and hit download button
save the downloaded file
I have been looking at Jsoup (since Java is my preference), but I could also try Scrapy etc. I need to understand whether this is commonly done and whether there is some other technology that enables it.
I could set this up using something like Selenium, but I don't want a tool that uses a browser as a UA because of the huge additional overhead.
I am getting somewhere, but the whole cookie management is getting very confusing.
Thanks,
Vivek
If you require a lot of interaction with the webpage as you describe, there is no way around using a real browser - at least in my experience. Selenium WebDriver works great with PhantomJS, though, so the overhead is not too big.
As pointed out in the comment below, you can use something like mechanize as well; however, such solutions tend to be useless when there is JavaScript that changes the DOM on the pages (see http://wwwsearch.sourceforge.net/mechanize/faq.html#script).
I recommend you use Fiddler2 and navigate through the site as normal.
Once you have done that, you should easily be able to replicate the required page calls, and anything JavaScript may have done, with minimal fuss and code.
I tend to use the Download() method in the class below to fetch pages in a number of forms at once; it also saves cookies for login-protected sites.
This is my HttpHelper class. It is easy to use and it's just HTML:
<?php
class HttpHelper {

    public $ch; // shared cURL handle

    function __construct() {
        //setcookie("UserPostcode","2065",time() + 3600);
        $this->ch = curl_init();
        define("WEBBOT_NAME", "Test Webbot");
        # Length of time cURL will wait for a response (seconds)
        define("CURL_TIMEOUT", 25);
        # Location of your cookie file. (Must be fully resolved local address)
        define("COOKIE_FILE", "cookie.txt");
        # DEFINE METHOD CONSTANTS
        define("HEAD", "HEAD");
        define("GET", "GET");
        define("POST", "POST");
        # DEFINE HEADER INCLUSION
        define("EXCL_HEAD", FALSE);
        define("INCL_HEAD", TRUE);
        $header = array();
        $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,";
        $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
        $header[] = "Cache-Control: max-age=0";
        $header[] = "Connection: keep-alive";
        $header[] = "Keep-Alive: 300";
        $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
        $header[] = "Accept-Language: en-us,en;q=0.5";
        $header[] = "Pragma: "; // browsers keep this blank.
        curl_setopt($this->ch, CURLOPT_HTTPHEADER, $header); // Set Header Information
    }
    // Collects the HTML, Status, Errors and a DOM.
    function Download($href)
    {
        curl_setopt($this->ch, CURLOPT_COOKIEJAR, COOKIE_FILE);  // Cookie management.
        curl_setopt($this->ch, CURLOPT_COOKIEFILE, COOKIE_FILE);
        curl_setopt($this->ch, CURLOPT_TIMEOUT, CURL_TIMEOUT);   // Timeout
        curl_setopt($this->ch, CURLOPT_USERAGENT, WEBBOT_NAME);  // Webbot name
        curl_setopt($this->ch, CURLOPT_VERBOSE, FALSE);          // Minimize logs
        curl_setopt($this->ch, CURLOPT_SSL_VERIFYPEER, FALSE);   // No certificate check
        curl_setopt($this->ch, CURLOPT_FOLLOWLOCATION, TRUE);    // Follow redirects
        curl_setopt($this->ch, CURLOPT_MAXREDIRS, 4);            // Limit redirections to four
        curl_setopt($this->ch, CURLOPT_RETURNTRANSFER, TRUE);    // Return as string
        curl_setopt($this->ch, CURLOPT_URL, $href);              // Target site
        curl_setopt($this->ch, CURLOPT_REFERER, $href);          // Referer value
        # Create return arrays
        $return_array['FILE'] = curl_exec($this->ch);
        $return_array['STATUS'] = curl_getinfo($this->ch);
        $return_array['ERRORS'] = curl_error($this->ch);
        $dom_document = new DOMDocument();
        @$dom_document->loadHTML($return_array['FILE']); // @ suppresses warnings from malformed HTML
        $return_array['DOM'] = new DOMXpath($dom_document);
        return $return_array;
    }
    function http_post_form($target, $ref, $data_array)
    {
        return $this->http($target, $ref, $method="POST", $data_array, EXCL_HEAD);
    }

    function http_post_withheader($target, $ref, $data_array)
    {
        return $this->http($target, $ref, $method="POST", $data_array, INCL_HEAD);
    }
    function http($target, $ref, $method, $data_array, $incl_head)
    {
        # Initialize PHP/CURL handle
        $ch = curl_init();
        # Process data, if presented
        if(is_array($data_array))
        {
            # Convert data array into a query string (ie animal=dog&sport=baseball)
            foreach ($data_array as $key => $value)
            {
                if(strlen(trim($value))>0)
                    $temp_string[] = $key . "=" . urlencode($value);
                else
                    $temp_string[] = $key;
            }
            $query_string = join('&', $temp_string);
        }else{
            $query_string = $data_array;
        }
        # HEAD method configuration
        if($method == HEAD)
        {
            curl_setopt($ch, CURLOPT_HEADER, TRUE); // Include the HTTP header
            curl_setopt($ch, CURLOPT_NOBODY, TRUE); // Do not return the body
        }
        else
        {
            # GET method configuration
            if($method == GET)
            {
                if(isset($query_string))
                    $target = $target . "?" . $query_string;
                curl_setopt($ch, CURLOPT_HTTPGET, TRUE);
                curl_setopt($ch, CURLOPT_POST, FALSE);
            }
            # POST method configuration
            if($method == POST)
            {
                if(isset($query_string))
                    curl_setopt($ch, CURLOPT_POSTFIELDS, $query_string);
                curl_setopt($ch, CURLOPT_POST, TRUE);
                curl_setopt($ch, CURLOPT_HTTPGET, FALSE);
            }
            curl_setopt($ch, CURLOPT_HEADER, $incl_head); // Include head as needed
            curl_setopt($ch, CURLOPT_NOBODY, FALSE);      // Return body
        }
        curl_setopt($ch, CURLOPT_COOKIEJAR, COOKIE_FILE);  // Cookie management.
        curl_setopt($ch, CURLOPT_COOKIEFILE, COOKIE_FILE);
        curl_setopt($ch, CURLOPT_TIMEOUT, CURL_TIMEOUT);   // Timeout
        curl_setopt($ch, CURLOPT_USERAGENT, WEBBOT_NAME);  // Webbot name
        curl_setopt($ch, CURLOPT_URL, $target);            // Target site
        curl_setopt($ch, CURLOPT_REFERER, $ref);           // Referer value
        curl_setopt($ch, CURLOPT_VERBOSE, FALSE);          // Minimize logs
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);   // No certificate check
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);    // Follow redirects
        curl_setopt($ch, CURLOPT_MAXREDIRS, 4);            // Limit redirections to four
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);    // Return as string
        # Create return array
        $return_array['FILE'] = curl_exec($ch);
        $return_array['STATUS'] = curl_getinfo($ch);
        $return_array['ERROR'] = curl_error($ch);
        # Close PHP/CURL handle
        curl_close($ch);
        # Return results
        return $return_array;
    }
    function InnerHtml($element)
    {
        $innerHTML = "";
        if($element != NULL && $element->hasChildNodes())
        {
            $children = $element->childNodes;
            foreach ($children as $child)
            {
                $tmp_dom = new DOMDocument();
                $tmp_dom->appendChild($tmp_dom->importNode($child, true));
                $innerHTML .= trim($tmp_dom->saveHTML());
            }
        }
        return $innerHTML;
    }

    function Split($data, $split)
    {
        return explode($split, $data);
    }
    function correctImgUrls($html, $url)
    {
        $DOM = new DOMDocument;
        $DOM->loadHTML($html);
        $imgs = $DOM->getElementsByTagName('img');
        foreach($imgs as $img){
            $src = $img->getAttribute('src');
            if(strpos($src, $url) !== 0){
                $img->setAttribute('src', $url.$src);
            }
        }
        $html = $DOM->saveHTML();
        return $html;
    }
    function correctUrls($html, $url)
    {
        $DOM = new DOMDocument;
        $DOM->loadHTML($html);
        $links = $DOM->getElementsByTagName('a');
        foreach($links as $link){
            $src = $link->getAttribute('href');
            if(strpos($src, $url) !== 0){
                $link->setAttribute('href', $url.$src);
            }
        }
        $html = $DOM->saveHTML();
        return $html;
    }
    function removeHref($html)
    {
        $DOM = new DOMDocument;
        $DOM->loadHTML($html);
        $links = $DOM->getElementsByTagName('a');
        foreach($links as $link){
            $link->setAttribute('href', "#");
        }
        $html = $DOM->saveHTML();
        return $html;
    }
    function QuerySelector($dom, $xPath)
    {
        return $dom->query($xPath);
    }

    /*
    function __destruct() {
        # Close PHP/CURL handle
        echo "Destruct Called..";
        curl_close($ch);
    }
    */
}
?>
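For a plain GET with query parameters, the http() method can also be called directly; a small sketch with a made-up URL, showing the array-to-query-string behaviour described in the comments:

// Assumes the HttpHelper class above has been included
$bot = new HttpHelper;
// Becomes http://www.example.com/search?animal=dog&sport=baseball
$result = $bot->http("http://www.example.com/search", "http://www.example.com/", GET, array('animal' => 'dog', 'sport' => 'baseball'), EXCL_HEAD);
echo $result['STATUS']['http_code']; // e.g. 200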
Simulate your login and do what you need to do. This is an example I use to log in to my oDesk account and scrape job postings that I then email to myself :P
include("Business/Http/HttpHelper.php");
$bot = new HttpHelper;
//$download = $bot ->Download("https://www.odesk.com/login");
$data['username'] = "myusername";
$data['password'] = "myPassword";
$bot -> http_post_form("https://www.odesk.com/login", "https://www.odesk.com/login", $data);
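After that login post, the cookies are persisted to cookie.txt by the class, so a follow-up request can reuse the session; the URL and XPath below are placeholders for whatever pages and markup you actually target:

// Fetch a page that needs the logged-in session and query it (hypothetical URL and XPath)
$page = $bot->Download("https://www.odesk.com/jobs/");
$links = $bot->QuerySelector($page['DOM'], "//a[contains(@href, '/jobs/')]");
foreach ($links as $link) {
    echo trim($link->textContent) . "\n";
}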
You owe me!