React-Leaflet and Auto Snapshot/Save to PNG - next.js

I absolutely love react-leaflet! I'm using it to produce basemaps for weather alerts on a Next.js-driven site. My company requires that the server stay on the intranet only, so for employees outside the company I'm sending an e-mail for specific alerts and would like to attach a PNG image of the map with the alert features.

I've dug into leaflet.print and leaflet.image, but both seem to require human interaction to press a button in order to save the image. I'd like this to happen automatically when the alerts are e-mailed to the recipients. I understand that having a View/DOM (browser) might be necessary; however, for automation reasons, I'd want this to happen on the backend (Node.js).

Is there anything out there that could do what I'm looking for, or is the need for a DOM/View going to make it difficult or impossible? Thanks in advance. I usually try to research as much as I can before asking a potentially repetitive question, but I'm stuck here.
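One possible direction, since Leaflet does need a DOM to render: drive a headless browser from Node (for example Puppeteer), load the page that renders the react-leaflet map, and screenshot it before attaching the PNG to the e-mail. This is a minimal sketch only; the intranet URL, the route, and the #map selector are made up for illustration.

```javascript
// Minimal sketch: screenshot a react-leaflet map page with a headless browser.
// The URL, route, and '#map' selector are assumptions, not real values.
const puppeteer = require('puppeteer');

async function snapshotMap(alertId) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 800, height: 600 });

  // Load the page that renders the alert map; wait for tiles to finish loading.
  await page.goto(`http://intranet-host/alerts/${alertId}/map`, {
    waitUntil: 'networkidle0',
  });

  // Screenshot just the map container (selector is an assumption).
  const mapEl = await page.$('#map');
  const png = await mapEl.screenshot({ type: 'png' });

  await browser.close();
  return png; // Buffer you can attach to the outgoing e-mail
}
```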

Related

How can I monitor a website to look for changes within a specific tag?

I will explain exactly what I am trying to do, and maybe someone can tell me a simple way that I can do it.
I want to track the amount of money pledged on a Kickstarter project page. The amount pledged is consistently kept within a certain tag. What are all the ways I can do this programmatically?
I am just starting to learn how to develop for the web, so that should be useful context for helping me. (I've learned bits and pieces of C, Python, VB, JS, HTML/CSS.)
Is there a simple, hacky way to do this with free tools? How would I do it all on my own? Extending this idea further, how would I notify my Android device when the amount has surpassed a predefined threshold? Is this the process known as scraping? What tools do I need at my disposal to accomplish this? What language do I need to use? Do I need my own web space?
If I eventually made this concept into an Android app, is there a way to load only a small portion of a website (maybe even just enough source to get to the tag I am looking for), so I can get the data I want from the page without wasting a bunch of my smartphone data loading the rest of the stuff I didn't want?
Thank you for any help you can provide!
I'm not familiar with Kickstarter's API -- do they have one? -- but here is how I'd approach this problem:
You want to "ping" Kickstarter periodically for information. One way to do this on Android is to use the BuzzBox SDK.
With each execution of the background task (a rough sketch of these steps follows the list):
Load the relevant portion of the Kickstarter page, e.g. with jQuery, into your own HTML document.
Compare it with the threshold and possibly the previously stored value. This should be doable with a basic <= comparison unless you want to get elaborate with the parsing.
Use an Android notification to alert the user once the amount is updated.
Wrap all this into an app.
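For the polling-and-parsing part, here is the same idea sketched in Node rather than on Android, just to show the shape of the logic. The project URL, the .pledged selector, the threshold, and cheerio as the parser are all assumptions (and the global fetch assumes Node 18+).

```javascript
// Periodically fetch the project page, extract the pledged amount, and
// compare it against a threshold. Selector, URL, and threshold are made up.
const cheerio = require('cheerio');

const THRESHOLD = 50000;
let lastAmount = 0;

async function checkPledged() {
  const res = await fetch('https://www.kickstarter.com/projects/some/project');
  const html = await res.text();
  const $ = cheerio.load(html);

  // Hypothetical selector for the pledged-amount element.
  const text = $('.pledged').first().text();
  const amount = Number(text.replace(/[^0-9.]/g, ''));

  if (amount >= THRESHOLD && lastAmount < THRESHOLD) {
    console.log(`Pledges passed ${THRESHOLD}: ${amount}`);
    // notify here (push notification, e-mail, etc.)
  }
  lastAmount = amount;
}

// "Ping" periodically, e.g. every 15 minutes.
setInterval(checkPledged, 15 * 60 * 1000);
```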

Where Does JQuery/Client-Side Programming Fit Into MVP and DDD

I'm working on a pretty big project right now and am trying to implement an MVP architecture. I'm starting to run across instances where I think jQuery or JavaScript might be better suited than server-side code. I'm looking for feedback on how others are implementing client-side programming into their enterprise applications. How are you structuring the client-side code, and how do you determine when to use it?
Things that can make the user say "wow". For example: populating search results while the user has typed just 3-4 characters of the search term. Think back to Yahoo or Hotmail, which used to post back to the server when you clicked "Create Message". When Google came along, they did it on the client side without going to the server. I bet you said "wow" to that. At least I did.
Things that can reduce server load. For example: adding an extra data-entry row to an HTML table instead of doing it through a round trip, increasing/decreasing a quantity, etc. (a quick sketch of this appears below).
These are just a few examples to cite. Even to do these things properly you still need to go to the server, but that happens behind the scenes using Ajax. Beyond this you need to select the jQuery plugins you will use in your project. To name a few: jQuery UI, jQuery Validation, jQuery AnythingSlider, etc. There are a great many of them.
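For instance, the "add a row without a round trip" example might look roughly like this with jQuery; the table id, field names, and button classes are hypothetical.

```javascript
// Add a data-entry row entirely on the client; no postback needed.
// '#add-row', '#order-table', and the field names are made-up examples.
$('#add-row').on('click', function () {
  var row = '<tr>' +
    '<td><input type="text" name="item[]"></td>' +
    '<td><input type="number" name="qty[]" value="1"></td>' +
    '</tr>';
  $('#order-table tbody').append(row);
});

// Increase a quantity without a postback; the actual save still happens
// behind the scenes via Ajax when the user submits.
$('#order-table').on('click', '.qty-up', function () {
  var $qty = $(this).closest('tr').find('input[name="qty[]"]');
  $qty.val(Number($qty.val()) + 1);
});
```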
http://ClearTrip.com is one site that I envy for its UX. Visit it from a mobile device and you will get further clues about their UX work. Besides just coding, you need a person on your team who can work on these UX aspects.
Regarding how this fits into DDD: I've just recently started my journey into DDD but one hears a lot about command/query separation in that circle. Certainly if you are doing something that hits your domain (like fetching for auto-completion or certainly if you allow partial page submission to accomplish a domain command) you have to decide how it gets there and how the domain is structured to handle it.
I think two decisions are most relevant.
First, bits entirely in the browser, and even those specifically in your application layer, are outside your domain and thus, though covered in the layered-architecture part of the DDD discussion, do not land in the entity/value/event/service, etc. discussion. If, however, you are using AJAX to interact with your application layer and in turn need to access your domain, you again need to consider two things, in my mind.
(a) Are you separating commands and queries simply using different methods on your domain? Fine if you have a relatively small demand for either queries or commands and this will not seem like "noise" in your domain API. Otherwise, you have a separate bounded context...another domain modeled just for queries that your UI needs to avoid clutter on your domain. Regardless, you are doing something like JS->AJAX handler in application layer->domain (including a domain service).
(b) Is this a command or a query? Once you have (a) figured out, this lets you know where the access will land...then use the presentation layer's use case to elaborate the domain concept and put it into your ubiquitous language.
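As a concrete (and entirely hypothetical) illustration of the JS -> AJAX handler -> domain path with queries and commands kept separate, here is a sketch in Express; the query service and command handler are stand-in stubs, not a real domain model.

```javascript
// Hypothetical thin AJAX handlers in the application layer, delegating to a
// query service on the read side and a command handler on the write side.
const express = require('express');
const app = express();
app.use(express.json());

// Stubs standing in for the application/domain layer (assumptions).
const customerQueryService = { search: async term => [] };   // query model
const cancelOrderHandler = { handle: async command => {} };  // domain command

// Query side: read-only, may be served by a model shaped just for the UI.
app.get('/api/customers/autocomplete', async (req, res) => {
  const matches = await customerQueryService.search(req.query.term || '');
  res.json(matches);
});

// Command side: expresses an intent in the ubiquitous language.
app.post('/api/orders/:id/cancel', async (req, res) => {
  await cancelOrderHandler.handle({ orderId: req.params.id });
  res.sendStatus(204);
});

app.listen(3000);
```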
Second, you have the DTO vs direct to domain decision. This can be a religious war gathering topic, but usually the answer is "depends." I think there are cases for using DTOs and cases for not (within the same architecture)...just search for all the discussions around the topic and apply the pattern only where it adds value; I won't try to cover details here.
Hope this provides some insight, or at least a conversation magnet to which others will add.
I guess this question is a little too subjective. Looks like I'm just going to grab a few books on advanced JavaScript and study up on the jQuery library.

Prevent automated tools from accessing the website

The data on our website can easily be scraped. How can we detect whether a human is viewing the site or a tool?
One way might be to measure how long a user stays on a page, but I do not know how to implement that. Can anyone help me detect and prevent automated tools from scraping data from my website?
I used a security image in the login section, but even then a human may log in and then run an automated tool. When the reCAPTCHA image appears after a period of time, the user can type the security image and then, again, use an automated tool to continue scraping data.
I developed a tool to scrape another site, so I only want to prevent this from happening to my own site!
DON'T do it.
It's the web, you will not be able to stop someone from scraping data if they really want it. I've done it many, many times before and got around every restriction they put in place. In fact having a restriction in place motivates me further to try and get the data.
The more you restrict your system, the worse you'll make user experience for legitimate users. Just a bad idea.
It's the web. You need to assume that anything you put out there can be read by human or machine. Even if you can prevent it today, someone will figure out how to bypass it tomorrow. Captchas have been broken for some time now, and sooner or later, so will the alternatives.
However, here are some ideas for the time being.
And here are a few more.
And my favorite: one clever site I've run across has a good one. It asks a question like "On our 'About Us' page, what is the street name of our support office?" or something like that. It takes a human to find the "About Us" page (the link doesn't say "About Us"; it says something similar that a person would figure out), and then to find the support office address (different from the main corporate office and several others listed on the page) you have to look through several matches. Current computer technology wouldn't be able to figure it out any more than it can figure out true speech recognition or cognition.
A Google search for "captcha alternatives" turns up quite a bit.
This can't be done without risking false positives (and annoying users).
How can we detect whether a human is viewing the site or a tool?
You can't. How would you handle tools that parse the page on behalf of a human, like screen readers and accessibility tools?
For example, one way is by calculating the time a user stays on a page, from which we can detect whether human intervention is involved. I do not know how to implement that; I'm just thinking about this method. Can anyone help with how to detect and prevent automated tools from scraping data from my website?
You won't detect automated tools, only unusual behavior. And before you can define unusual behavior, you need to know what's usual. People view pages in different orders, browser tabs allow them to do parallel tasks, etc.
I should make a note that if there's a will, then there is a way.
That being said, I thought about what you've asked previously and here are some simple things I came up with:
Simple naive checks might be user-agent filtering and checking (a naive version is sketched after this list). You can find a list of common crawler user agents here: http://www.useragentstring.com/pages/Crawlerlist/
You can always display your data in Flash, though I do not recommend it.
Use a captcha.
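As a minimal illustration of the user-agent check in the first point, something like the following might do; the pattern list is just an example and a scraper can trivially fake its user agent.

```javascript
// Naive user-agent filtering; the pattern list is illustrative only and is
// easily spoofed by any scraper that sets a browser-like User-Agent.
const botPattern = /bot|crawl|spider|slurp|curl|wget|python-requests/i;

function looksLikeBot(userAgent) {
  return !userAgent || botPattern.test(userAgent);
}

// e.g. in Express: if (looksLikeBot(req.get('User-Agent'))) return res.sendStatus(403);
```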
Other than that, I'm not really sure if there's anything else you can do but I would be interested in seeing the answers as well.
EDIT:
Google does something interesting: if you're searching for SSNs, after the 50th page or so they will show a captcha. That raises the question of whether you can intelligently time how long a user spends on your page or, if you introduce pagination into the equation, how long a user spends on one page.
Using the information we assumed above, it is possible to enforce a time limit before another HTTP request is allowed. At that point, it might be beneficial to "randomly" generate a captcha. What I mean by this is that maybe one HTTP request will go through fine, but the next one will require a captcha. You can switch those up as you please.
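Here is a rough sketch of that counting-and-challenging idea as Express middleware; the thresholds, the in-memory Map, and the /captcha route are all assumptions (a real deployment would track sessions, not just IPs, and persist the counts).

```javascript
// Count requests per IP and occasionally divert to a captcha challenge.
// Thresholds and the '/captcha' route are assumptions for illustration.
const counts = new Map();

function maybeRequireCaptcha(req, res, next) {
  const n = (counts.get(req.ip) || 0) + 1;
  counts.set(req.ip, n);

  // Every ~50th request, or "randomly" about 2% of the time, ask for a captcha.
  if (n % 50 === 0 || Math.random() < 0.02) {
    return res.redirect('/captcha?return=' + encodeURIComponent(req.originalUrl));
  }
  next();
}

// app.use(maybeRequireCaptcha);  // attach as Express middleware
```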
Scrapers steal the data from your website by parsing URLs and reading the source code of your pages. The following steps can be taken to at least make scraping a bit more difficult, if not impossible.
Ajax requests make it harder to parse the data and require extra effort to work out which URLs need to be parsed.
Use cookies even for normal pages that do not require any authentication: create the cookie when the user visits the home page and then require it for all the inner pages (see the sketch after this list). This makes scraping a bit more difficult.
Display encrypted content on the website and then decrypt it at load time using JavaScript. I have seen this on a couple of websites.
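A sketch of the cookie idea in point 2, using Express and cookie-parser; the cookie name and routes are assumptions, and a scraper that stores cookies will still get through, so this only raises the bar slightly.

```javascript
// Set a cookie on the home page and require it on inner pages.
// Cookie name and routes are made-up examples.
const express = require('express');
const cookieParser = require('cookie-parser');
const app = express();
app.use(cookieParser());

app.get('/', (req, res) => {
  res.cookie('visited', '1', { httpOnly: true });
  res.send('home page');
});

app.use('/inner', (req, res, next) => {
  if (!req.cookies.visited) return res.redirect('/'); // no cookie, bounce to home
  next();
});

app.get('/inner/data', (req, res) => res.send('inner page content'));

app.listen(3000);
```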
I guess the only good solution is to limit the rate at which the data can be accessed. It may not completely prevent scraping, but at least you can limit the speed at which automated scraping tools work, hopefully to below a level that discourages scraping the data.
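A minimal fixed-window limiter along those lines might look like this; the window size and limit are arbitrary, and in production you would more likely reach for something like express-rate-limit or a shared store rather than an in-process Map.

```javascript
// Fixed-window rate limit per IP; window size and limit are arbitrary choices.
const WINDOW_MS = 60 * 1000;   // 1 minute
const MAX_REQUESTS = 30;       // per IP per window
const hits = new Map();        // ip -> { count, windowStart }

function rateLimit(req, res, next) {
  const now = Date.now();
  const entry = hits.get(req.ip) || { count: 0, windowStart: now };

  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(req.ip, entry);

  if (entry.count > MAX_REQUESTS) {
    return res.status(429).send('Too many requests, slow down.');
  }
  next();
}

// app.use(rateLimit);  // attach as Express middleware
```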

How to prevent someone from hacking API feed?

I have started developing a webpage and recently hired someone to write code to display a customized feed (powered by API) in the middle panel on http://farmball.com/. Note that this is not the RSS feed tied to the site blog. The feed ties to my account on another site. There is no RSS link for an average user to subscribe to the feed. I've taken the site out of maintenance mode to ask anyone here with scraping/hacking experience how someone would most easily go about 'taking' the feed and displaying it on their own site. More importantly, what can I do to prevent it?
^Updated for re-wording
You can't.
If you are going to expose an RSS feed which you don't want others to be able to display on their site, then you are completely missing the point of RSS. The entire reason for Really Simple Syndication (RSS) is to make your content externally consumable, whether that's in an RSS reader or through someone simply republishing its content on their own website.
Why are you including an RSS feed if you do not want someone to be able to consume it?
what can I do to prevent...'taking' the feed and displaying it on their own site?
Nothing. Preventing reuse goes against the basic concept of RSS, which is to make it as easy as possible for anyone to do anything they want with it. It was designed from the ground up to be Really Simple to Syndicate, not Really Hard to Retransmit Without Permission.
You could restrict access to the feed itself to trusted users only by making them provide some credentials or pass in a key to the feed (e.g. yoursite.rss?mykey=abc123). But you cannot control use. Only access.
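That key check might look something like the sketch below; the route, the key store, and the feed builder are hypothetical placeholders.

```javascript
// Gate the feed behind a per-user key, as in yoursite.rss?mykey=abc123.
// The key store and the feed builder are made-up placeholders.
const express = require('express');
const app = express();

const validKeys = new Set(['abc123']); // keys you hand out to trusted users

app.get('/yoursite.rss', (req, res) => {
  if (!validKeys.has(req.query.mykey)) {
    return res.sendStatus(403); // no valid key, no feed
  }
  res.type('application/rss+xml').send(buildFeedXml());
});

function buildFeedXml() {
  return '<?xml version="1.0"?><rss version="2.0"><channel>' +
         '<title>My feed</title></channel></rss>';
}

app.listen(3000);
```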
Be explicit about your license. This isn't a technology solution; as others have mentioned, RSS is an open technology, not DRM. But if you ask in each post that people who use the feed not repost it, not omit credit, and so on, then some people will respond to the request.
Otherwise, you're better off putting your content behind a password and using a paid subscription model for distributing your content.
This is a DRM problem essentially. If you had some technique that you could put content on the web without having it redistributable, the music industry would love you.
It is possible to try to prevent redistribution. One technique you could try is embedding a signature of some sort into the feed for each user who you require to sign up. If the content is found on the web, you can identify and ban the user who redistributed your content.
This is avoidable too, by getting multiple accounts and normalizing the content to remove fingerprints. For the would-be pirate, this requires more effort than they may be willing to put in. Your signature could be a unique whitespace pattern, tiny variances in the timestamps on posts, misplaced pixels in videos, or any other thing you can vary slightly without end users noticing.
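As an illustration of the whitespace-signature idea, here is one made-up encoding (not a standard technique, just a sketch): derive a few bits from the subscriber's id and encode them as single versus double spaces after sentences.

```javascript
// Vary invisible whitespace per subscriber so a leaked copy can be traced.
// The encoding scheme here is an assumption for illustration only.
const crypto = require('crypto');

// Derive a short bit pattern from the subscriber id.
function fingerprintBits(subscriberId, bits = 16) {
  const hash = crypto.createHash('sha256').update(subscriberId).digest();
  return Array.from({ length: bits }, (_, i) => (hash[i >> 3] >> (i % 8)) & 1);
}

// Encode the bits as single vs double spaces after the first N sentences.
function watermarkText(text, subscriberId) {
  const bits = fingerprintBits(subscriberId);
  let i = 0;
  return text.replace(/\. /g, match =>
    i < bits.length ? '.' + ' '.repeat(1 + bits[i++]) : match
  );
}

// watermarkText(itemBody, 'user-42') -> same visible content, unique spacing
```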
Use .htpasswd.
Better yet, don't put something private in a public place where it's likely to get picked up by software automatically. Like others have said, it's a pretty odd question; if you're trying to figure something else out, you're better off being explicit about what you want to know.

What might my user have installed thats going to break my web app?

There are probably thousands of applications out there like 'Google Web Accelerator' and all kinds of popup blockers. Then there are header-blocking personal firewalls, full-site blockers, and paranoid cookie monsters.
Fortunately, Web Accelerator is now defunct (I suggest you read the above article; it's actually quite funny what issues it caused), but there are so many other plugins and third-party apps out there that it's impossible to test them all with your app until it's out in the wild.
What I'm looking for is advice on the most important things to remember when writing a web app (whatever the technology) with respect to ensuring the user's environment isn't going to break it. Kind of like a checklist.
What's the craziest thing you've experienced?
PS. I may have linked to net-nanny above, but I'm not trying to make a porn site
The best advice I can give is to program defensively. For example, don't assume that all of your scripts will be loaded. I've seen cases where Adblock Plus will block one of the ten scripts included in a page just because its name or path contains the word "ad". While you can work around this by renaming the file, it's still good to check that a particular object exists before using it.
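In practice that check can be as simple as testing for the global before calling it; AdWidget here is a hypothetical global exposed by one of your own scripts.

```javascript
// Defensive check: don't assume a script actually loaded (an ad blocker may
// have stripped it). 'AdWidget' is a hypothetical global from your own code.
if (typeof AdWidget !== 'undefined' && typeof AdWidget.init === 'function') {
  AdWidget.init();
} else {
  // Degrade gracefully instead of throwing a ReferenceError.
  console.warn('AdWidget script did not load; continuing without it.');
}
```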
The weirdest thing I've seen wasn't so much a browser plugin as a firewall/proxy configuration at a user's workplace. They were using a Squid proxy that tried to remove ads by replacing any image HTTP request it thought was an ad with a single-pixel GIF. Unfortunately it did this for non-GIF images too, so when our iPhone application was expecting a PNG image and got a GIF, it would crash.
Internet Explorer 6. :)
No, but seriously: Firefox plugins like NoScript and Greasemonkey, for one, though those are likely to be a very small minority.
Sometimes the user's environment means a screen reader (or even a braille interface like this). If your layout is in any way critical to the content being delivered as intended, you've got a problem right there.
Web pages break; that's a fact of life. The closer you have been coding and designing to the standards, the less it is your fault.
Something I have checked in the past is loading some of the more popular toolbars that people tend to install (Google, Yahoo, MSN, etc.) and seeing how that affects the user's experience.
To a certain extent it is difficult to preempt which of the products you mentioned will be used by your users since there are so many. I would say your best bet is to test for the most frequent products that your user base may employ and roll with the punches for the rest. If you have the time to test other possible scenarios, by all means do.
Also, making it easy for your users to report possible issues helps lessen the time it takes to get a fix in place, should it be something you can work around.
