I'm creating a Symfony2 bundle which helps with making requests to some API.
When the user does not pass the token value (required in the request), I try to acquire this token and save it in my bundle directory (to read it later). But this path is not writable.
How can I handle simple data storage? Is my approach good, or am I missing something?
Should my bundle save such values in the /vendor directory?
#edit: Asking the user to make the directory writable is IMO a bad solution.
No. /vendor should not be writable by your application. You'll likely overwrite anything you save to /vendor the next time you perform a composer update or composer install.
It sounds like what you need is a persistent piece of configuration information that's outside the scope of your parameters.yml or config.yml (since you want to change it at runtime). Saving into a cache directory doesn't sound appropriate, so you're going to want to store it in some persistent location: probably your database, or a persistent key in Redis or other similar storage.
If a cache directory is sufficient, though, you can get the location from the container's kernel.cache_dir parameter; but note that it will be erased each time the cache is cleared.
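If the cache directory is good enough for you, here's a minimal sketch of a token store that takes its directory as a constructor argument, so that %kernel.cache_dir% plus a sub-directory can be wired in from the service definition. The class and file names are made up for illustration:

```php
<?php
// Hypothetical token store; wire "%kernel.cache_dir%/my_api_bundle"
// (or a truly persistent directory) in as the constructor argument.
class FileTokenStore
{
    private $dir;

    public function __construct($dir)
    {
        $this->dir = $dir;
    }

    public function save($token)
    {
        if (!is_dir($this->dir)) {
            mkdir($this->dir, 0775, true); // the cache dir is writable, unlike /vendor
        }
        file_put_contents($this->dir . '/api_token', $token);
    }

    public function load()
    {
        $file = $this->dir . '/api_token';

        return is_file($file) ? file_get_contents($file) : null;
    }
}
```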
How can Symfony deliver static files without bootstrapping/executing the framework?
For example: if some requests fail at the webserver (images or js files are not found, or something like this), then the framework tries to resolve the route. Of course this route does not exist.
Is there a way to avoid this or blacklist these extensions?
It could be a cache problem.
If it is:
If it is a cache problem, you could try to clear the cache with cache:clear on the Symfony console. If that doesn't work, you could try removing the resources in the general folder, leaving the original ones in your bundle, and running assetic:dump and assets:install (see the commands below).
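On a Symfony2 project those commands would look something like this, run from the project root (the --env flag depends on which environment is affected):

```
php app/console cache:clear --env=prod
php app/console assets:install web
php app/console assetic:dump
```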
If it isn't:
Regarding the "remove-symfony-routing" thing, I don't know if it's possible, but it should not be done anyway.
What you're asking is to be able to access, from the client side, any file on the server, which constitutes a major security breach.
This could allow the client to get any file on the server, meaning he could get his hands on your JavaScript or PHP files, which most of the time contain valuable information (such as how your app works, or even worse: global passwords and config values).
What you could do to access resources from the client is a route that points to a controller function that outputs the file you're looking for to the browser, provided it has an extension you'd be OK to share. For example, you could allow any image file but forbid code files such as PHP or JavaScript. A sketch of such a controller is below.
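A minimal sketch of such a whitelisting controller (Symfony 2.x style; the route, directory and class names are made up for illustration):

```php
<?php

namespace Acme\AssetBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpKernel\Exception\NotFoundHttpException;

class AssetController extends Controller
{
    // e.g. mapped to /assets/{filename} in routing.yml
    public function serveAction($filename)
    {
        // Extension whitelist: images are fine, php/js are never served.
        $types = array(
            'png'  => 'image/png',
            'jpg'  => 'image/jpeg',
            'jpeg' => 'image/jpeg',
            'gif'  => 'image/gif',
        );
        $ext = strtolower(pathinfo($filename, PATHINFO_EXTENSION));

        if (!isset($types[$ext])) {
            throw new NotFoundHttpException();
        }

        // basename() strips any "../" so the client can't escape the folder.
        $path = $this->container->getParameter('kernel.root_dir')
              . '/../web/uploads/' . basename($filename);

        if (!is_file($path)) {
            throw new NotFoundHttpException();
        }

        return new Response(file_get_contents($path), 200, array(
            'Content-Type' => $types[$ext],
        ));
    }
}
```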
EDIT: Or yeah, configure your webserver correctly. Two simple answers appeared while I was typing :D
We have a situation where a node (it was a client lib folder) got deleted from the AEM repository, and we're not sure which user did this. I was looking to see whether AEM stores node/folder deletion history somewhere, so that we can identify who deleted the node.
A few options I tried/was thinking of:
Tried to check the logs for some info, but I didn't see any log entries with the node name on creation or deletion of a node.
Have a content change listener on the repository, but that would load AEM unnecessarily. Also, this would not give information on nodes which were deleted before the listener was registered.
Is there an audit log or history stored for deleted nodes in AEM?
Yes, AEM can store and provide audit log entries for WCM events, e.g. page modifications.
But it requires the audit logger to be enabled (through the Configuration Admin console at /system/console/configMgr).
If this is the case, then check either the audit.log file in your logs directory or the audit records below /var/audit.
If it is a client lib folder that got deleted, then the audit log won't help you much, because it logs page/DAM creation, change and deletion events.
You need to write your own listener for that, which will just make the repository grow.
I can only think of this happening in a dev-like environment, as write access to /etc or /apps should be restricted in prod-like environments.
Anyway, to restore the content, just reinstall the package through which the clientlib was installed.
I was looking to add a separate sub-directory to the cache directory for one of the services.
What is the default cache path variable (%cache%) for Symfony2?
Wasn't easy to find, but it's %kernel.cache_dir%.
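For example, to carve out a sub-directory of it at runtime (a sketch; the sub-directory name is made up):

```php
<?php
// Inside a container-aware class: resolve a per-service sub-directory
// under the kernel cache dir and make sure it exists.
$cacheDir = $container->getParameter('kernel.cache_dir') . '/acme_demo';

if (!is_dir($cacheDir)) {
    mkdir($cacheDir, 0775, true);
}
```

In a service definition you can pass it the same way, e.g. as the argument "%kernel.cache_dir%/acme_demo".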
Suppose the URL http://example.com/test.php. If I type this URL into the browser address bar, the PHP code is executed and its output is returned to me. Fine. But what if, instead of executing it, I wanted to view its source as plain text? Is there a way to issue such a request?
I believe that there must be some way, and my concern is that some outsider could retrieve sensitive code, such as a configuration file, by guessing its location. For example, Joomla installations have a configuration.php in the root folder. If someone retrieves such a file as plain text, then those database credentials have been seriously compromised. Obviously, this could be prevented with proper permissions, but it's all too common to just set 0777 on everything and forget about access denials.
For PHP: if properly configured, there is no way to download it. File permissions won't help either way, as the webserver needs to be able to read the files, and that's what is serving the contents. However, a webserver can for instance be configured to serve them with x-httpd-php-source, or the PHP/webserver configuration may be broken. This is why files which don't need direct access (DB config, class definitions, etc.) should be outside the document root, so there is no way those files will get served by accident even when the webserver config is incorrect or failing. If your current hoster does not allow you to store files outside the document root, switch hosting a.s.a.p.
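A minimal sketch of that layout (the paths and key names are just an example):

```php
<?php
// public_html/index.php -- the only entry point under the document root.
// The credentials live one level up, outside the document root, so the
// webserver can never serve that file directly, even if PHP handling breaks.
$config = require __DIR__ . '/../app/config.php';

echo 'Connecting as ' . $config['db_user'];
```

```php
<?php
// app/config.php -- never reachable by URL.
return array(
    'db_host' => 'localhost',
    'db_user' => 'app',
    'db_pass' => 'secret',
);
```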
There is a way to issue a request that downloads the source code of http://example.com/test.php only if the server is configured to provide a URL for doing so. Usually it isn't, so usually there is no way to issue such a request.
What would be the best method to implement the following scenario:
The web site calls for an image gallery that stores both private and public images. I've heard that you can store them either in a file hierarchy or in a database. In a file hierarchy setup, how would you prevent direct access to the images? In a database setup, access to the images would only be possible via the web page view. What would be an effective solution to pursue?
[Edit] Thanks all for the responses. I decided that the database route is the best option for this application, since I do not have direct access to the server and am confined to a webroot folder. All the responses were much appreciated.
Having used both methods, I'd say go with the database. If you store them in the filestore and they need protecting, then you'd have to store them outside the web root and use a handler (like John mentions) to retrieve them anyway. It's just as easy to write a handler to stream them directly from the database, and you get a few advantages:
With a database you don't need to worry about filestore permissions or generating unique filenames or folder hierarchies, etc.
With a database you can easily apply permissions and protection directly; no trying to work out who can view what based on paths, etc.
With a database you can store the image and metadata all together: when you delete the metadata you delete the image, with no possibility of orphaned records where you delete from the database but not from the filestore.
It's easier to back up the database and images together and then restore them.
The disadvantage is performance, but you can use caching etc. to help with that. You can also use FILESTREAM storage in SQL Server 2008 (and 05?), which means you get filesystem performance but via the DB:
"FILESTREAM integrates the SQL Server
Database Engine with an NTFS file
system by storing varbinary(max)
binary large object (BLOB) data as
files on the file system. Transact-SQL
statements can insert, update, query,
search, and back up FILESTREAM data.
Win32 file system interfaces provide
streaming access to the data.
FILESTREAM uses the NT system cache
for caching file data. This helps
reduce any effect that FILESTREAM data
might have on Database Engine
performance. The SQL Server buffer
pool is not used; therefore, this
memory is available for query
processing."
Using a file hierarchy, you can put the files outside the website folder. For example, suppose the web folder is c:/inetpub/wwwroot/somesite; put the files under c:/images/, so that web users won't be able to access the image files directly. But then you cannot use a direct link in your website either; you need to create some procedure that reads the file and returns the stream.
Personally, I think it's better to put the files in the database, and still create some procedure that retrieves the binary image data and returns it wherever it's needed.
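A minimal sketch of such a "read and stream" procedure, in PHP for illustration (the session check, lookup table and paths are all made up; the same idea carries over to any server-side stack):

```php
<?php
// image.php?id=42 -- streams a protected image stored outside the webroot,
// but only after an authorization check.
session_start();

if (empty($_SESSION['user_id'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// Look the file up by id rather than by a user-supplied path,
// so no path traversal is possible.
$images = array(42 => 'photo.jpg'); // stand-in for a database lookup
$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

if (!isset($images[$id])) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

$path = '/var/app/private_images/' . $images[$id];
header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($path));
readfile($path);
```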
In reality both scenarios are very similar, so it's up to you... Databases weren't designed to serve files, but if the size isn't really a concern for you, I don't see a problem with doing it.
To answer your question about direct access: you'd set up the file-based images the same way you would for the database. You'd use some sort of page (probably an .ashx handler) that serves the images, giving you a layer of logic between the user and the image to determine whether or not they should have access to it. The actual directory the images are located in would then need to either a) not be part of the directory structure in IIS, or b) if it is part of IIS, allow only Windows-authenticated access, and only grant the account the application process runs under access to the directory.
If you're using IIS7, since .NET jumps into the pipeline early, I believe you can protect jpg files as well, just by using a role manager and applying roles to file system folders. If you're using IIS6, I've done something similar to the answer by John, where I store the actual file outside of the wwwroot and use a handler to decide whether the user has the correct credentials to view the image.
I would avoid the database unless you have a strong reason to do this - and I don't think a photo gallery is one of them.
Neither. Amazon S3 offers a very simple API for accepting uploads. You can use SimpleDB or your SQL database to track the URLs and permissions. Set the entire S3 bucket to private, and authenticate to it using your AWS key on the ASP.NET server.
Very little code is required to upload to S3, and very little more would be required to perform the bookkeeping in SQL.
Once they're in S3, grab the image resizer library and the S3 Reader plugin and you can have your entire system running in under an hour. And - it will scale properly. No disk or database space limits. Ever.
You can implement authorization using the AuthorizeImage event of the Image Resizer library. Just throw an AccessDeniedException if access isn't allowed for the current user.
If you want to tune performance a bit more, add both the DiskCache and CloudFront plugins. CloudFront can edge-cache the public images (inexpensively), and DiskCache will handle the private images, serving them at static-file speeds.