I'd like to know if it is possible to use Waze's community API data, such as danger points reported by users, in an external app and integrate it with some of our features.
I'm afraid Waze has strict rules about who they expose user-submitted reports to. They provide a feed of reports for specific areas through the Waze Connected Citizens Program (CCP), but this program is generally only available to governmental institutions and road authorities. The program also typically expects a data flow towards Waze (road closures, for example).
That said, I don't work for Waze and I don't know the details of the app you are working on. Depending on the type of app, they may be willing to collaborate. You could always apply as a CCP partner and perhaps they will make an exception. Just be aware that while Waze has a lot of users, it is not a very big company, so support queries may take a while to get a reply.
Note that it is technically possible to scrape the information from the Waze Live Map, but I'd strongly advise against doing that without permission, as it could lead to legal action.
After some experimenting, I noticed it is possible to send events directly to a server container via HTTP request instead of pushing to the data layer (which is connected to a web container). A big advantage of this setup is that the front-end doesn't need to load any GTM script. Still, I have some doubts, because I can't find much documentation about this setup. It also brings some challenges, like implementing the automatically collected events (e.g. page_view). Does anyone have experience with this setup, or can anyone tell me why I shouldn't be following this path?
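For context, the kind of call I mean looks roughly like this; the endpoint path and payload shape below are just placeholders I made up, not any documented sGTM schema:

```typescript
// Rough illustration only: sending an event straight to a server-side GTM
// endpoint with fetch, bypassing the web container / dataLayer entirely.
// The path and payload shape are placeholders, not an official schema.
async function sendEvent(name: string, params: Record<string, unknown>): Promise<void> {
  await fetch("https://sgtm.example.com/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      event_name: name,
      ...params,
      page_location: window.location.href,
      timestamp: Date.now(),
    }),
    keepalive: true, // let the request survive page unloads
  });
}

sendEvent("page_view", { page_title: document.title });
```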
Regards, Thomas
This is definitely not common practice, although it is actually the technically more beneficial path. A few things it gives you, actually:
Can make your tracking completely immune to adblockers.
Has the potential to protect you from malicious analytics spam and makes it much harder for third parties to pollute your data.
Doesn't surface your analytics stack and libraries to the public.
Is typically way lighter than the GTM lib.
Gives you a much better degree of control over what happens and much more power over the tracking.
But this is only if you have the competency to develop it, which is actually a rarity. Normally web developers don't know analytics well enough to make it work well, while analytics developers lack the technical knowledge. You suddenly can't just hire a junior or mid-level implementation expert to help with the tracking. A lot of those who call themselves seniors wouldn't be able to maintain raw JS tracking libraries either.
As you've mentioned, you won't be able to rely on the automatic tracking from the GTM or gtag libraries. Not having automatic events isn't actually the main issue, though. The more important thing is manually collecting all dimensions, including the proper maintenance of client IDs and session IDs.
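To make that concrete, here's a minimal sketch of what maintaining your own IDs tends to look like; the storage keys, the 30-minute timeout and the UUID helper are my arbitrary choices, not a GA-compatible scheme:

```typescript
// Minimal sketch of hand-rolled client/session ID maintenance.
// Names and lifetimes are arbitrary choices, not a GA-compatible scheme.
const CLIENT_ID_KEY = "app_client_id";
const SESSION_KEY = "app_session";
const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // 30 minutes of inactivity

function getClientId(): string {
  let id = localStorage.getItem(CLIENT_ID_KEY);
  if (!id) {
    id = crypto.randomUUID(); // persists across visits, identifies the browser
    localStorage.setItem(CLIENT_ID_KEY, id);
  }
  return id;
}

function getSessionId(): string {
  const now = Date.now();
  const raw = sessionStorage.getItem(SESSION_KEY);
  const stored = raw ? (JSON.parse(raw) as { id: string; lastSeen: number }) : null;
  const expired = !stored || now - stored.lastSeen > SESSION_TIMEOUT_MS;
  const id = expired ? crypto.randomUUID() : stored.id;
  sessionStorage.setItem(SESSION_KEY, JSON.stringify({ id, lastSeen: now }));
  return id;
}
```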
Once your front-end is ready, it's important to note that you don't want to expose your server-side GTM's endpoint. I mean, you can, but this would defeat the purpose significantly. You want to make a mirror on your backend that would reroute the events to the sGTM.
Finally, you may want to add some kind of encryption/protection/validation/authentication logic on your mirror for the data. It's worth considering because, without surfacing the endpoints, you can further conceal what you're doing and thus avoid much of the potential data tampering. This won't make it impossible to look into what you're doing, of course, but it will make casual interference nearly impossible.
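A rough sketch of both points together, the mirror and the validation, assuming an Express backend; the route, header name, secret and sGTM URL are all placeholders:

```typescript
import express from "express";
import crypto from "crypto";

// Sketch of a backend "mirror": it validates an HMAC signature that the
// front-end attaches to each event (computed over the same JSON string with
// the same shared secret), then reroutes the payload to the hidden sGTM
// endpoint. Secret, header name and URL are placeholders.
const app = express();
app.use(express.json());

const SHARED_SECRET = process.env.TRACKING_SECRET ?? "change-me";
const SGTM_URL = "https://sgtm.internal.example.com/collect"; // never exposed publicly

app.post("/t", async (req, res) => {
  const signature = req.header("x-signature") ?? "";
  const expected = crypto
    .createHmac("sha256", SHARED_SECRET)
    .update(JSON.stringify(req.body))
    .digest("hex");

  if (signature !== expected) {
    return res.status(204).end(); // silently drop tampered or casual junk traffic
  }

  // Reroute the validated event to the server container (Node 18+ global fetch).
  await fetch(SGTM_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req.body),
  });
  res.status(204).end();
});

app.listen(3000);
```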
In the end, people don't do it because it would effectively double the monetary cost of tracking, since sufficiently skilled experts charge roughly double what regular analytics folks charge, while the quality of the data only improves by about 10-20%. Such a trade generally doesn't make business sense unless you're a huge corporation for which even enterprise analytics solutions like Adobe Analytics are not good enough. Amazon would probably be a good example.
Also, if you're already redefining users and sessions, you're not that far from using something like Segment for tracking, ETLing all of that into a data warehouse, and using a proper BI tool for further analysis. At that point, is there still any sense in having sGTM at all, when you can just stream your events to Segment in real time from your mirror and have it seamlessly re-integrate the data into GA, Firebase, AA, Snowflake, Facebook and tens if not hundreds of other destinations, all server-side?
You want to know where to stop, and the best way to do it is by assessing the depth of the analysis/data science your company is conducting on the user behavioral data. And in 99% of cases, it's not deep enough to even consider sGTM.
In response to #BNazaruk
So it's been a while now… I've been looking into the setup, because it's just way too cool. I also took a deeper dive into CGTM to better understand the benefits of SGTM. And honestly, everything that has the potential to replace CGTM should be considered. My main reasons are:
Cybersecurity - Through injection it is possible to insert malicious software like keyloggers. The only thing preventing this is the login details to CGTM, which are, relatively speaking, easy to get with targeted phishing.
Speed - A CGTM setup with about 10-15 tags means an average performance loss of around 40 points in Lighthouse.
Quality - Like you said: because of browser restrictions like cookie policies and ad blockers that intercept/manipulate/block CGTM signals, on average 10-20% of events are not registered properly.
Mistakes - Developing code outside a proper dev process limits insight into the impact of that code, with possible errors or performance loss as a result.
So far I have created a standardized setup (container templates, measurement plans, libraries) for online marketers and developers to use. Within the setup, we maintain our own client and session IDs. Developers are able to make optimal use of SGTM and increase productivity drastically. The only downside to the setup is that we still use CGTM to implement page_view and exceptions, which is a shame, because I'm not far away from a full server-to-server setup. Companies are still too skeptical to fully commit to SGTM, I guess. Still, my feeling is that in 5 years' time, high-end apps won't use CGTM anymore.
Once again, thanks for your answer, it’s been an important part of my journey.
I am working on a small project trying to control some steps of a workflow in a web application using MS Teams. My idea is to use R as an intermediate step between the application (which has a number of API endpoints I can call from R) and Microsoft Teams chats (or channels). Users would then use a set of keywords in the chat to trigger an action in the application. For example, they might type "publish ABC-123" in a specific chat, and this would lead to the application publishing document ABC-123 somewhere, with R orchestrating everything.
I have a couple of ideas but there are drawbacks:
I thought originally about using microsoft365r. We have an app registered in Microsoft 365 which would allow us to monitor a specific chat for messages that trigger actions in R. The problem with this approach is that we would need to have the R code running and checking MS Teams every couple of minutes. It is certainly doable, but not very elegant.
Another option could be setting up a plumber API and an outgoing webhook in MS Teams. This seems like the ideal way to do it, but webhooks in MS Teams require HTTPS, and as far as I understand that is not straightforward to implement in plumber.
I would appreciate any ideas on how to do this. I know I am not very specific, but mostly looking for high level pointers of what I could look at. Many thanks!
You actually have a bunch of options for this:
Create a bot directly in code, e.g. per https://learn.microsoft.com/en-us/microsoftteams/platform/bots/what-are-bots . There's a bit of a learning curve of course, and it depends on whether you have development skills outside of R, e.g. Python, .NET, whatever. The bot would then call your code as needed.
Create a no-code bot using Power Virtual Agents. This is the equivalent, for bots, of Power Apps or Power Automate, if you're familiar with those.
Create a workflow, either in Power Automate or Azure Logic Apps, that can listen for and respond to messages. This is kind of similar to a bot, but with finer scope (and therefore less capability). If you want it to call out to your app, e.g. to an endpoint, you'd need a Premium Connector for Power Automate, or you can use an Azure Logic App directly (it uses the same engine, but the pricing model is different and Power Automate is a little easier to work with).
Outgoing webhook - you can implement these standalone, but from your use case it actually sounds like a bot would be better anyway, and a bot is more or less what you need to build to make this kind of webhook work properly.
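If you do go the outgoing webhook route, the endpoint mainly has to verify the HMAC that Teams puts in the Authorization header (computed over the raw request body with the security token you get when creating the webhook, as far as I recall) and reply with a small JSON message. A sketch in TypeScript/Express, which you'd translate to plumber if you stay in R:

```typescript
import express from "express";
import crypto from "crypto";

// Sketch of a Teams outgoing-webhook endpoint. As far as I recall, Teams signs
// the raw request body with HMAC-SHA256 using the (base64) security token shown
// when you create the webhook, and sends it as "Authorization: HMAC <base64>".
const app = express();
const SECURITY_TOKEN = process.env.TEAMS_WEBHOOK_SECRET ?? ""; // token from Teams

app.post("/teams", express.raw({ type: "*/*" }), (req, res) => {
  const auth = req.header("authorization") ?? "";
  const expected =
    "HMAC " +
    crypto
      .createHmac("sha256", Buffer.from(SECURITY_TOKEN, "base64"))
      .update(req.body) // the raw body bytes, not re-serialized JSON
      .digest("base64");

  if (auth !== expected) return res.status(401).end();

  const message = JSON.parse(req.body.toString("utf8"));
  const text: string = message.text ?? ""; // e.g. "@MyBot publish ABC-123"

  // ...parse the keyword here and call your application's API (or R) to act on it...

  res.json({ type: "message", text: `Received: ${text.trim()}` });
});

app.listen(3978);
```

Teams also requires the endpoint to be reachable over HTTPS, so in practice you'd put this (or the plumber equivalent) behind a reverse proxy such as nginx or Caddy that terminates TLS.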
I'm thinking of using an MBaaS such as Firebase or Kinvey for my next app, and am wondering if any exist which encrypt application data end-to-end (i.e. such that the encryption keys are never shared with the service provider). This seems feasible in theory, since the server is not expected to do any computation on the data, only store it and deliver it to clients.
Does such a service exist? I've found ZeroDB and Crypton, but neither are available as services AFAICT, which means I'd have to administer, scale, and back them up myself. I also thought of using something like Firebase and encrypting my app's data before I pass it to the Firebase API, but I'm wary of writing a one-off crypto layer like that unless I have to (i.e. I'd rather use something that's been peer-reviewed).
Alternatively, if no such service currently exists, why not? Is it technically infeasible, or is there just no market for it?
Edit: This seems closest to what I'm looking for, but considering the broken links on their website I'm guessing it's defunct: Adreneline Mobility
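To be clearer about the do-it-yourself option above, this is roughly the kind of one-off layer I'd end up writing myself (Web Crypto, AES-GCM); the key management between clients is exactly the part I'd rather not design on my own:

```typescript
// Illustration of the DIY layer: encrypt on the client with Web Crypto
// (AES-GCM) and only hand ciphertext to the storage service. Key management
// (deriving and sharing keys between clients) is deliberately left out.
async function encryptForStorage(key: CryptoKey, plaintext: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit nonce per record
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return {
    iv: btoa(String.fromCharCode(...Array.from(iv))),
    data: btoa(String.fromCharCode(...Array.from(new Uint8Array(ciphertext)))),
  };
}

// The back-end only ever sees { iv, data } blobs, e.g. (Firebase JS SDK):
// firebase.database().ref(`notes/${id}`).set(await encryptForStorage(key, text));
```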
The answer to your question is actually available on the market. CloudMine offers end-to-end encryption (disclosure - I work at CloudMine). They have a largely healthcare-focused offering, so it has to stand up to HIPAA and other government regulations around data security.
Here's a good overview video on security featuring CloudMine's CTO. The first 45 sec. provide some more information on our encryption techniques.
I know I'm being the "sales guy" right now but I'm happy to hop on a call to share what we've built and discuss your specific use case. You can email me at nick at cloudmineinc.com if you're interested.
Virgil Security (full disclosure - I work there) has an end-to-end encryption SDK that works for any endpoint, and also has a special integration with Firebase. It's open source, of course. Check it out and feel free to ask any questions of the team here or on Slack - https://e3kit.readme.io/
I've been reading about Firebase and playing with it for a short while. The idea (BaaS) and implementation are impressive, and having programmed with JavaScript, it seems like a viable choice. Not having to deal with scaling and other server-side concerns makes it even more attractive.
My question is: generally speaking, is Firebase a first-class back-end candidate for any average data-based application? e.g. billing, CRM, e-commerce, social, location-based, etc. I do not include super-light or heavy extremes such as a basic chat, or a nuclear plant monitor...
The answer may not be a clear yes/no, but was it built to support the general application space, or just stand out as a real-time read/write data service?
Would appreciate answers based on experience and existing production applications.
Thanks
Yes, Firebase is intended to be a first-class back-end for any data-based Web, iOS or Android application. The service offers real-time data reads and writes, but also comes with a powerful and flexible security system that allows you to write secure client-only apps, without needing any server code to enforce data boundaries.
There are several production apps listed as customers on the front page and on the app showcase page at https://firebase.google.com/customers/
Firebase is now more capable and can be considered a full stand-alone back-end, especially after the introduction of Cloud Functions: https://firebase.google.com/docs/functions/
Firebase may not have support for transactions spanning multiple business objects.
e.g. when a sales order is booked, it needs to update inventory for multiple items, update billing in receivables, give sales credit to multiple salespeople, etc.
The Firebase team is supposed to come up with a database trigger option that would make all of this possible.
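A rough sketch of what such a trigger-based fan-out could look like with a Cloud Functions database trigger; all paths and field names below are made up for illustration:

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Rough sketch of the fan-out pattern: when a sales order is created, a
// database trigger updates the related records. All paths and field names
// here are made up for illustration; error handling is omitted.
export const onOrderBooked = functions.database
  .ref("/orders/{orderId}")
  .onCreate(async (snapshot, context) => {
    const order = snapshot.val();
    const db = admin.database();
    const updates: Record<string, unknown> = {};

    // Decrement inventory for every item on the order.
    for (const item of order.items ?? []) {
      updates[`/inventory/${item.sku}/stock`] =
        admin.database.ServerValue.increment(-item.qty);
    }

    // Record the receivable.
    updates[`/receivables/${context.params.orderId}`] = {
      customerId: order.customerId,
      amount: order.total,
    };

    // Give sales credit to each salesperson on the order.
    for (const rep of order.salesReps ?? []) {
      updates[`/salesCredits/${rep}/${context.params.orderId}`] = order.total;
    }

    // A multi-location update like this is applied atomically by the
    // Realtime Database, which covers many "multiple business objects" cases.
    await db.ref().update(updates);
  });
```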
I'm using this link https://www.google.com/reader/api/0/stream/contents/feed/FEEDHERE?output=json&n=20
to fetch feeds using Google's algorithm. As you can see, I'm not adding any other parameters, just fetching the returned data in JSON format. My app will hopefully be heavily used, and if I send a lot of requests to this link, will Google block my access or something?
Is there anything I can include, like userip or a URL for my app (so that if they have a problem they can just contact me), or something else?
The most basic answer to your question is that Google will change its Terms of Service whenever it likes, and you've got no say in the matter. So if it's allowed today, it might not be allowed tomorrow, at Google's whim.
On this issue, though, you seem fairly safe. From the Terms of Service (this is the general document, since Reader doesn't seem to have a specific one):
Don’t misuse our Services. For example, don’t interfere with our Services or try to access them using a method other than the interface and the instructions that we provide.
Google provides these feeds as RSS and Atom, so I assume they expect them to be used. They don't say that it's a misuse to point someone else at those feeds, so it looks OK for now, but they could add such a clause at any time.
All online services are subject to the terms and conditions of their providers. So, as others have said, they may be OK with your use today, but they can change their mind any time down the line. I doubt including a URL, email or other contact info will help anything: when these services change, the providers don't notify every user of the service, they just announce the change publicly. Usually they give several months' notice to give users a chance to adapt their applications, but this is not standardized or enforced, so there is no guarantee. One example would be the fairly recent discontinuance of the Google Finance API (for which no replacement has been announced).
The safest approach would be to design your app so that the feature that uses Google's functionality is decoupled as much as possible from the rest of your app, so that when or if the availability of the service changes (i.e. it's no longer available at all), you can adapt your app to use some other source for the feeds with minimal impact on the rest of the app. Design for change and plan for the worst.
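For example, hiding the feed provider behind a small interface keeps the rest of the app unaware of where the items come from; the type and field names below are only illustrative:

```typescript
// Keep the provider-specific part behind an interface so the rest of the app
// never knows where feed items come from. Type and field names are illustrative.
interface FeedItem {
  title: string;
  url: string;
  published: Date;
}

interface FeedSource {
  fetchItems(feedUrl: string, limit: number): Promise<FeedItem[]>;
}

// Today's implementation wraps the Google endpoint...
class GoogleFeedSource implements FeedSource {
  async fetchItems(feedUrl: string, limit: number): Promise<FeedItem[]> {
    const api =
      "https://www.google.com/reader/api/0/stream/contents/feed/" +
      encodeURIComponent(feedUrl) +
      `?output=json&n=${limit}`;
    const res = await fetch(api);
    const json = await res.json();
    return (json.items ?? []).map((i: any) => ({
      title: i.title ?? "",
      url: i.alternate?.[0]?.href ?? "",
      published: new Date((i.published ?? 0) * 1000),
    }));
  }
}

// ...and if the service disappears, only this class gets replaced; callers
// keep depending on FeedSource rather than on Google.
```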