Can ownCloud 9's tags be read with WebDAV?

Via WebDAV, we would like to read the tags put on a file in the ownCloud 9 web interface.
Secondly, also via WebDAV, we would like to search by tag, just as is possible in the web interface.
Has anyone seen whether this is possible?
We cannot find such functionality in the mobile APIs either.
Thanks!
/Aksel
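
ownCloud's WebDAV endpoint does accept PROPFIND requests for custom properties, and the `oc:tags` property (in the `http://owncloud.org/ns` namespace) is the usual candidate for file tags. The sketch below is an assumption-heavy illustration, not confirmed against an ownCloud 9 server: the server URL, the exact property name, and the shape of the multistatus response may differ by version, so check your server's WebDAV documentation.

```python
# Sketch: reading ownCloud file tags over WebDAV with PROPFIND.
# The oc:tags property name, the response shape, and the server URL are
# assumptions for ownCloud 9 -- verify against your server's docs.
import urllib.request
import xml.etree.ElementTree as ET

OC_NS = "http://owncloud.org/ns"

def propfind_body() -> bytes:
    """PROPFIND body asking only for the ownCloud 'tags' property."""
    return (
        '<?xml version="1.0"?>'
        '<d:propfind xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">'
        '<d:prop><oc:tags/></d:prop>'
        '</d:propfind>'
    ).encode()

def parse_tags(multistatus_xml: str) -> list:
    """Extract tag strings from a 207 Multi-Status response body."""
    root = ET.fromstring(multistatus_xml)
    return [t.text for t in root.iter("{%s}tag" % OC_NS) if t.text]

def fetch_tags(base_url, path, auth_header):
    """Send the PROPFIND (hypothetical server; not exercised here)."""
    req = urllib.request.Request(
        base_url + path, data=propfind_body(), method="PROPFIND",
        headers={"Depth": "0", "Content-Type": "application/xml",
                 "Authorization": auth_header})
    with urllib.request.urlopen(req) as resp:
        return parse_tags(resp.read().decode())
```

Tag-based *search* is a different matter: the web interface uses server-side endpoints that are not plain WebDAV, so even if reading works, searching may require the server's REST/systemtags API instead.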

Related

How can I host a website and web application on the same server using AWS?

Excuse my lack of server-architecture knowledge, but I'm a bit confused about what applications, servers, environments, etc. are and how they can communicate with each other. I just got AWS, and here is what I want to do ultimately.
I want to create a Google Chrome extension. For simplicity, let's say that I'm trying to make an app that records the number of times that all users with the extension collectively visit a given webpage, plus information about the visits, such as the time they visited and the duration. So if I go to Facebook.com and 100 other people with the extension did too, I would see an iframe, let's say, that says "100 users have been here and they visited at these times: ...". Of course, the extension also needs to communicate with the server to increase the count by one. The point is, there is no need to visit any webpage for this app to work, since it's an extension; it still returns HTML and JavaScript, but visiting a page isn't the point.
Now, I also want a homepage for the app in case people are interested in the extension for whatever reason. Just like Adblock, you don't need to go to their actual website, but it's good to have one.
My question is, how do I set this up? Do I just have a normal website, i.e. www.example.com/, set up normally with WordPress (what I'd like to use), and then designate one address, e.g. www.example.com/app, to be answered by my Python app? If so, how do I do that? What do I need in AWS? I'm familiar with Flask and have written apps on my local server using it; can that be integrated with WordPress?
Sorry if this is confusing.
"I also want a homepage for the app in case people are interested in the extension"
The simplest option is to host the home page as a static website (HTML, CSS, JS) in an S3 bucket.
But if you really want WordPress, you can do that too.
For the backend web services that your extension talks to, you can use Elastic Beanstalk; it is a very simple way to do that without wiring up all the components yourself.
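
Since the asker mentions Flask, here is a sketch of the core visit-counting logic such a backend would wrap in two small routes (POST a visit, GET the stats). All names here (`VisitStore`, `record_visit`, `stats`) are hypothetical, and the in-memory store stands in for whatever database the deployment would actually use.

```python
# Sketch of the visit-counting core the extension's backend would expose.
# Names are hypothetical; a Flask app would wrap these in two routes
# (POST /visit, GET /stats), and a real deployment would persist to a DB.
from collections import defaultdict
from datetime import datetime, timezone

class VisitStore:
    """In-memory store mapping a URL to its recorded visits."""

    def __init__(self):
        # url -> list of (timestamp, duration_seconds)
        self._visits = defaultdict(list)

    def record_visit(self, url, duration_seconds, when=None):
        """Called when the extension reports a page visit."""
        when = when or datetime.now(timezone.utc)
        self._visits[url].append((when, duration_seconds))

    def stats(self, url):
        """What the iframe would display: count plus visit times."""
        visits = self._visits[url]
        return {
            "count": len(visits),
            "times": [t.isoformat() for t, _ in visits],
        }

store = VisitStore()
store.record_visit("https://facebook.com", 42.0)
print(store.stats("https://facebook.com")["count"])  # 1
```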

Hosting images on Dropbox

I'm looking for a server for hosting images from a webservice that I'm working on. The webservice will need to access the images many times; I'll upload about 4 GB of images per day to show to users. My idea is to host the images there and get public links to put in the HTML.
So I'd like to know whether Dropbox is an adequate tool for this, because I was studying the Dropbox API and I don't think it offers adequate tools to get the images' public links.
Summarizing my question: is this kind of host suitable for this kind of service or not?
As the other comments and answers say, a normal CDN is probably a better choice for this.
For reference though, the Dropbox API does let you get publicly shareable links to files:
https://www.dropbox.com/developers/core/docs#shares
You can also modify these links as desired:
https://www.dropbox.com/help/201
There's also a direct and temporary version:
https://www.dropbox.com/developers/core/docs#media
Note however that there are bandwidth limits on these links:
https://www.dropbox.com/help/4204
Also, the API enables you to access file content directly:
https://www.dropbox.com/developers/core/docs#files-GET
Try Amazon Web Services S3: http://aws.amazon.com/s3/
Lots of big websites use this for serving images. It is very fast!

Whitelisting to stop undesirable bots using IIS

Basically I want to do this in IIS:
In Apache you can block many bots by simply changing your .htaccess files to OPT-IN instead of OPT-OUT, basically whitelisting instead of blacklisting. You let in Google, Yahoo, MSN, etc. and IE, Opera, Firefox, Netscape and bounce EVERYTHING else by default. The beauty here is you don't have to keep looking for bots anymore as anything that identifies itself as a bot will be bounced.
How do I achieve that in IIS? Can you please point me to an example? Thanks!
references: http://www.spanishseo.org/how-to-identify-user-agents-and-ip-addresses-for-bot-blocking
http://incredibill.blogspot.com/2011/05/whitelisting-not-blacklisting-to-stop.html
There's no native way of doing this in IIS. If you're using ASP.NET, it's easy enough to create an HttpModule to do this filtering, although unless we're talking about IIS 7, only .NET requests will be filtered.
Outside of that, you're looking at an ISAPI filter, written in something like C++ or Delphi, i.e. something that can compile to a DLL. They're not easy to write either.
I wrote something similar that uses Project Honeypot (http://projecthoneypot.org/) to block spammy IP addresses. You can get it here: http://code.google.com/p/blacklistprotector/
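
Whichever layer ends up doing the filtering (HttpModule, ISAPI filter, or a rewrite rule), the core opt-in check is the same: match the User-Agent against a whitelist and bounce everything else. The sketch below shows that logic in Python; the pattern list is illustrative, not a complete or authoritative catalogue of good bots and browsers.

```python
# Sketch of opt-in (whitelist) user-agent filtering: allow known crawlers
# and browsers, reject everything else by default. Patterns below are
# illustrative; an IIS HttpModule would run the same check per request
# and return 403 when is_allowed() is False.
import re

ALLOWED_AGENT_PATTERNS = [
    r"Googlebot", r"Slurp", r"bingbot", r"msnbot",   # search engines
    r"Firefox/", r"Chrome/", r"Safari/", r"Opera",   # common browsers
    r"MSIE|Trident",                                 # Internet Explorer
]
_allowed = re.compile("|".join(ALLOWED_AGENT_PATTERNS))

def is_allowed(user_agent: str) -> bool:
    """True if the User-Agent matches the whitelist (opt-in policy)."""
    return bool(user_agent) and bool(_allowed.search(user_agent))

print(is_allowed("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # True
print(is_allowed("EvilScraper/0.1"))                          # False
```

Note that user agents are trivially spoofed, so this bounces only bots that identify themselves honestly, which is exactly the trade-off the quoted Apache approach makes too.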

How to index a web site

I'm asking on behalf of somebody, so I don't have too many details.
What options are available for indexing site content in an ASP.NET web site? I suspect SQL Server's Full Text index may be used if the page content is stored in the database. How would I index dynamic and static content if that content isn't stored in the DB, but in html and aspx pages themselves?
We purchased Karamasoft Ultimate Search several years ago. It is a search-engine add-on for your web site. I like it because it is a simple tool that got searching working on our site. It is pretty inexpensive, and we knew we could buy something more capable later if we needed more or different features. We needed something that would give us searching without having to do a lot of programming.
Specifically, this tool is a web crawler. It will run on your web server and it will act like an end-user and navigate through your site keeping a record of your web pages, so when a real users searches, they are told the pages that have the content they want.
Keep in mind that it acts like an end-user, so your dynamic data is indexed right along with the static content, because it indexes the final rendered page. We needed this feature, and it is what appealed to us the most.
You can use a web crawler to crawl that site and add the content to a database which then is full text indexed. There are a number of web crawlers out there.
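
The crawl-then-index pipeline described above boils down to building an inverted index: a map from each term to the pages that contain it. A minimal sketch of that idea, with hypothetical page paths; Lucene and SQL Server full-text search do the same thing at scale, adding ranking, stemming, and incremental updates.

```python
# Minimal sketch of the "crawl, then full-text index" idea: an inverted
# index mapping each term to the set of pages containing it. Page paths
# and text are made up for illustration.
import re
from collections import defaultdict

index = defaultdict(set)  # term -> set of page URLs

def add_page(url, text):
    """Index one crawled page (crawler would call this per fetched page)."""
    for term in re.findall(r"[a-z0-9]+", text.lower()):
        index[term].add(url)

def search(query):
    """Pages containing every query term (a simple AND search)."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    if not terms:
        return set()
    results = index[terms[0]].copy()
    for term in terms[1:]:
        results &= index[term]
    return results

add_page("/about.aspx", "Our company history and contact details")
add_page("/products.aspx", "Product catalogue and contact form")
print(search("contact"))           # both pages
print(search("contact history"))   # only /about.aspx
```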
Lucene is a well-known open-source tool that would help you here. The main branch is Java-based, but there is a .NET port too.
Main site: http://lucene.apache.org/
.Net port: http://incubator.apache.org/lucene.net/
Having used several alternatives I would be loath to do anything other than Google Site Search.
The only reason I use SQL Full Text Search is to search through multiple columns. It's really hard to implement it in any effective manner.

ASP.NET reverse proxy: what to do with external resources?

I'm currently working on a concept for a reverse proxy to basically relay responses and requests between the user and an otherwise invisible website. So basically the user goes to a site, let's say www.myproxyportal.com, where it is possible to access a website (in an iframe) in the webserver's intranet which isn't made public (for example internal.myproxyportal.com).
I've been working on a solution where I translate request objects to the desired location and return the response to the website. It works great, except for things like CSS links, IMG tags, etc. I can do the translation, of course, but then the link would point to internal.myproxyportal.com/css/style.css, and that will never work from the outside.
How to approach such a thing?
Are there any out of the box solutions maybe?
EDIT: I found this, which is very similar to what I have written so far, but it also lacks support for external images, css, javascript, etc.
You can change settings in IIS to route all requests through the ASP.NET pipeline, not just .aspx pages. Then simply create an HttpHandler to handle them in your proxy.
By default, IIS doesn't run requests for static content through the ASP.NET engine.
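
Whichever pipeline ends up serving the request, the proxy also needs a response-rewriting step so that links to the internal host are translated back into proxy-facing URLs. A sketch of that step, with hypothetical hostnames taken from the question; a real proxy would additionally rewrite Location headers and cookie domains, and handle relative URLs.

```python
# Sketch of the link-rewriting step a reverse proxy needs: replace the
# internal host in HTML/CSS responses with the proxy's public base URL.
# Hostnames are the hypothetical ones from the question; Location headers,
# cookies, and relative URLs would need the same treatment.
INTERNAL = "http://internal.myproxyportal.com"
PUBLIC = "https://www.myproxyportal.com/app"

def rewrite_links(body: str) -> str:
    """Rewrite absolute internal links in a proxied response body."""
    return body.replace(INTERNAL, PUBLIC)

page = '<img src="http://internal.myproxyportal.com/img/logo.png">'
print(rewrite_links(page))
# <img src="https://www.myproxyportal.com/app/img/logo.png">
```

This is essentially what Apache's mod_proxy_html does for you, which is one reason the built-in Apache reverse proxy mentioned below is attractive.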
Apache has a pretty slick reverse proxy built-in, I use it extensively.
See more here: http://www.apachetutor.org/admin/reverseproxies
