I'm having trouble removing a permission from a single user using the OpenCMIS method Acl removeAcl(List<Ace> removeAces, AclPropagation aclPropagation).
I have a document or folder with several users that have permissions, and I just want to remove the permission for a single user.
This is the code I am using to remove the user:
OperationContext operationContext = new OperationContextImpl();
operationContext.setIncludeAcls(true);
Folder testFolder = (Folder) session.getObject("72deb421-3b8e-4268-9987-9c59a19f4a13");
testFolder = (Folder) session.getObject(testFolder, operationContext);
List<String> permissions = new ArrayList<String>();
permissions.add("{http://www.alfresco.org/model/content/1.0}folder.Coordinator");
String principal = "peter.sts";
Ace aceIn = session.getObjectFactory().createAce(principal, permissions);
List<Ace> aceListIn = new ArrayList<Ace>();
aceListIn.add(aceIn);
testFolder.removeAcl(aceListIn, AclPropagation.REPOSITORYDETERMINED);
testFolder = (Folder) session.getObject(testFolder, operationContext);
Here is the user and the permission associated with the folder that I want to remove (and only this user):
permissions.add("{http://www.alfresco.org/model/content/1.0}folder.Coordinator");
String principal = "peter.sts";
When I run the method, the permissions of all users associated with the folder are removed, not just this one.
What am I doing wrong?
You don't need to create an instance of an ACE if you only need to remove an entry. Example:
public void doExample() {
    OperationContext oc = new OperationContextImpl();
    oc.setIncludeAcls(true);
    Folder folder = (Folder) getSession().getObject("workspace://SpacesStore/5c8251c3-d309-4c88-a397-c408f4b34ed3", oc);
    // grab the ACL
    Acl acl = folder.getAcl();
    // dump the entries to sysout
    dumpAcl(acl);
    // iterate over the ACL entries, removing the one that matches the id we want to remove
    // (use an explicit iterator so the removal cannot throw a ConcurrentModificationException)
    List<Ace> aces = acl.getAces();
    Iterator<Ace> iterator = aces.iterator();
    while (iterator.hasNext()) {
        if (iterator.next().getPrincipalId().equals("tuser2")) {
            iterator.remove();
        }
    }
    // update the object ACL with the new list of ACL entries
    folder.setAcl(aces);
    // refresh the object
    folder.refresh();
    // dump the ACL to show the update
    acl = folder.getAcl();
    dumpAcl(acl);
}
public void dumpAcl(Acl acl) {
    List<Ace> aces = acl.getAces();
    for (Ace ace : aces) {
        System.out.println(String.format("%s has %s access", ace.getPrincipalId(), ace.getPermissions()));
    }
}
Running this against a folder that has three users (tuser1, tuser2, and tuser3), each with Collaborator access, prints the ACL entries before the update and then again after tuser2 has been removed:
GROUP_EVERYONE has [{http://www.alfresco.org/model/content/1.0}cmobject.Consumer] access
tuser1 has [{http://www.alfresco.org/model/content/1.0}cmobject.Collaborator] access
tuser2 has [{http://www.alfresco.org/model/content/1.0}cmobject.Collaborator] access
tuser3 has [{http://www.alfresco.org/model/content/1.0}cmobject.Collaborator] access
GROUP_EVERYONE has [{http://www.alfresco.org/model/content/1.0}cmobject.Consumer] access
tuser1 has [{http://www.alfresco.org/model/content/1.0}cmobject.Collaborator] access
tuser3 has [{http://www.alfresco.org/model/content/1.0}cmobject.Collaborator] access
I am new to Terraform, but I have created an OpenStack compute instance like this:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
# Import SSH key pair into openstack project
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
# Create a new virtual machine
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
}
For maintainability and flexibility reasons I would like to add some "components" to the same instance. It could be anything, but here I have tried with a file provisioner and remote execution.
However, when I add these arguments to my compute instance, I notice that the compute instance is not updated. For example:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
# Add a provisionner file on the ressource
provisioner "file" {
source = "foo_scripts/bar-setup.sh"
destination = "/tmp/bar-setup.sh"
connection {
type = "ssh"
user = "user"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
# execute server setup file
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bar-setup.sh",
"sudo bash /tmp/bar-setup.sh",
]
connection {
type = "ssh"
user = "centos"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
Indeed, after adding the file provisioner to the resource, when I run terraform plan or terraform apply, nothing changes on my instance. I get an info message telling me:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
What is the right way to apply my changes to my compute instance?
From the Terraform documentation:
Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction.
If you want the provisioners to run again, you should destroy (terraform destroy) and create (terraform apply) the resource again.
There's no way for Terraform to check the state of a local or a remote execution; it's not as if there's an API call that can tell you what happened in your custom code - bar-setup.sh.
That would be like magic, or actual Magic.
Terraform is for managing the infrastructure and the config of the instance, not really for the content on the instance. Immutable content and recreating - making a completely new instance - is the true path here. However, if Terraform is your hammer, there are ways.
If you taint the resource that you want to update, it will be destroyed and recreated the next time Terraform runs, which re-runs the provisioners. But heed what I said about hammers.
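For example, tainting the instance from the question looks roughly like this (a minimal sketch; the resource address assumes the resource names used above):
# mark the instance so the next apply destroys and recreates it,
# which re-runs the file and remote-exec provisioners
terraform taint openstack_compute_instance_v2.compute_instance
terraform apply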
Alternatively, you could leverage your CM tool of choice (Chef/Ansible) to manage the content of your instance, or create the (immutable) images used by OpenStack with a tool like Packer and update those. I'd do the latter.
It is fairly common to allow users to download a file via a path modifier in the URL:
// MVC Action to download the correct file from our Content directory
public ActionResult GetFile(string name) {
    string path = this.Server.MapPath("~/Content/" + name);
    byte[] file = System.IO.File.ReadAllBytes(path);
    return this.File(file, "html/text");
}
quoted from http://hugoware.net/blog/dude-for-real-encrypt-your-web-config
An application I'm working with has liberal path downloads (directory based) sprinkled throughout the application, hence it is super vulnerable to requests like "http://localhost:1100/Home/GetFile?name=../web.config" or (..%2fweb.config).
Is there an easy way to restrict access to the config file? Do I need to provide a custom Server.MapPath with whitelisted directories, or is there a better way?
How do you secure your file downloads - are path-based downloads inherently insecure?
A simple option, assuming that all files in the ~/Content directory are safe to download, would be to verify that the path is actually under (or in) the ~/Content directory and not up from it, as ~/Content/../web.config would be. I might do something like this:
// MVC Action to download the correct file from our Content directory
public ActionResult GetFile(string name) {
    // Safe path
    var safePath = this.Server.MapPath("~/Content");
    // Requested path
    string path = this.Server.MapPath("~/Content/" + name);
    // Make sure the requested path is under the safe path
    if (!path.StartsWith(safePath)) {
        // NOT SAFE! Do something here, like return an error instead of the file
        return this.HttpNotFound();
    }
    // Read the file and return it
    byte[] file = System.IO.File.ReadAllBytes(path);
    return this.File(file, "html/text");
}
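One extra edge case worth noting (this refinement is not part of the original answer): a plain StartsWith check would also accept a sibling folder such as ~/ContentBackup, because its physical path starts with the same prefix. A small, hypothetical hardening sketch compares against the directory path plus a trailing separator:
// Hypothetical hardening: match on a whole directory prefix, not just a string prefix,
// so a sibling folder like "~/ContentBackup" is not accepted by mistake.
var safeRoot = this.Server.MapPath("~/Content").TrimEnd(System.IO.Path.DirectorySeparatorChar)
               + System.IO.Path.DirectorySeparatorChar;
bool isSafe = System.IO.Path.GetFullPath(path).StartsWith(safeRoot, StringComparison.OrdinalIgnoreCase);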
I'm trying to get a list of files from a shared directory. Under this directory there are subdirectories named after the person who logged in to the application, so I have to automatically get the files according to this person.
For example: shared directory Clients, subdirectory Client1 (when Client1 logs in, he gets a list of the files located under the subdirectory Client1).
using System.IO;

String xmlPathName = Path.Combine(basePath, "Client1");
DirectoryInfo di = new DirectoryInfo(xmlPathName);
if (di.Exists)
{
    foreach (FileInfo file in di.GetFiles("*.xml"))
    {
        String fileName = file.Name;
        String fileFullName = file.FullName;
        // add some code for each file ...
    }
}
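To pick the subdirectory of the person who is currently logged in instead of hard-coding "Client1", one option is to build the path from the authenticated identity. This is only a sketch and assumes an ASP.NET application where the subdirectory names match the authenticated user names:
// Sketch: derive the subdirectory from the logged-in user's name
// (assumes the subdirectory names match the authenticated user names).
String clientName = HttpContext.Current.User.Identity.Name; // e.g. "Client1"
String clientPath = Path.Combine(basePath, clientName);
DirectoryInfo clientDir = new DirectoryInfo(clientPath);
if (clientDir.Exists)
{
    foreach (FileInfo file in clientDir.GetFiles("*.xml"))
    {
        // process each file as above
    }
}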
I call this function from within my code-behind:
DeleteFile(Server.MapPath("/") + "sitemap_index.xml")
Public Shared Function DeleteFile(ByVal filename As String) As Boolean
    'deletes file from server
    Dim bResult As Boolean = False
    Try
        If File.Exists(filename) Then
            'delete file
            File.Delete(filename)
            bResult = True
        Else
            bResult = True
        End If
    Catch ex As Exception
    End Try
    Return bResult
End Function
I then get the error: Access to the path 'E:\zz\wwwroot\sitemap_index.xml' is denied.
On my other sites this logic works great, but on the current site it doesn't. I checked the security settings on my Windows Server 2008 R2 Standard machine.
These are the settings I have on the wwwroot folder on my Windows server:
SYSTEM: Full Control
NETWORK SERVICE: Read + Write + Read & Execute + List folder contents
IIS_IUSRS: Read + Write
As suggested in other posts I've been reading, I tried adding other user groups, but there is no ASPNET service/group on my server.
When logged in as administrator (Forms authentication) I can click a button to recreate the sitemap_index.xml and sitemaps.xml.
Users should be able to delete and add images in the wwwroot\images\uploads folder.
Which group should I give what permissions to allow the above to be possible AND secure?
Check the access for the Application Pool user.
Find the application pool that your site is using, right-click on it and choose Advanced Settings.... The name of the user that the pool is running as is listed next to Identity.
Note that if the identity says "ApplicationPoolIdentity", you should check the access for the user IIS AppPool\<Name of the app pool here> (see the info about ApplicationPoolIdentity).
It looks like Modify permissions are required to delete files. Try granting Modify permissions to NETWORK SERVICE (or whichever identity the pool runs as).
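For example, granting Modify on the folder from the error message could look like this (a sketch; adjust the path and the account to match what Advanced Settings shows, and "YourAppPoolName" is a placeholder):
rem Grant Modify (M) to NETWORK SERVICE on wwwroot, inherited by subfolders and files
icacls "E:\zz\wwwroot" /grant "NETWORK SERVICE:(OI)(CI)M"
rem Or, if the pool runs as ApplicationPoolIdentity, grant the app pool account instead
icacls "E:\zz\wwwroot" /grant "IIS AppPool\YourAppPoolName:(OI)(CI)M"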
I had this problem too. When I changed the code as follows, I did not run into any permission errors:
public void DeleteDirectory(string target_dir)
{
    string[] files = Directory.GetFiles(target_dir);
    string[] dirs = Directory.GetDirectories(target_dir);

    // Clear the read-only flag, then delete each file
    foreach (string file in files)
    {
        System.IO.File.SetAttributes(file, FileAttributes.Normal);
        System.IO.File.Delete(file);
    }

    // Recurse into subdirectories
    foreach (string dir in dirs)
    {
        DeleteDirectory(dir);
    }

    // The directory is now empty, so a non-recursive delete is enough
    Directory.Delete(target_dir, false);
}
System.IO Exception: Logon failure: unknown user name or bad password.
Hi all, I have tried to resolve this issue with all possible solutions but could not succeed.
Requirement: I should be able to access an XML file located in a network share folder for validation of users and other purposes.
Problem: I am able to access the XML file located in the network share folder when debugging with VS 2010, but not when I publish to IIS 7.
Methods attempted: I created a user account XXX with a password and made the user part of the Administrators group, then set the website application pool identity to the custom user account (XXX) I created.
In the web.config I added a line:
<identity impersonate="true" userName="XXX" password="XXXXX"/>
Code where the exception is caught:
string UserConfigXML = "\\\\servername\\Engineering\\Kiosk Back Up\\UserCFG.XML";
reader = new StreamReader(UserConfigXML);
string input = null;
string[] sArray;
while ((input = reader.ReadLine().Trim()) != "</USERS>")
{
    if (input.Contains("<USER NAME="))
    {
        sArray = input.Split(new Char[] { '"' });
        string sUserName = sArray[1].ToString().ToUpper();
        string sDelivery = "";
        while ((input = reader.ReadLine().Trim()) != ("</USER>"))
        {
            char[] array2 = new char[] { '<', '>' };
            if (input.Contains("<DELIVERY_MECHANISM>"))
            {
                string[] mechanism = input.Split(array2);
                sDelivery = mechanism[2].ToString().ToUpper();
                if (sDelivery == "WEBMAIL")
                {
                    UsersList.Add(sUserName);
                }
            }
        }
    }
}
return UsersList;
Any ideas how to resolve this issue?
I propose 3 fixes for different scenarios:
1. If you have both computers (the server and the computer holding the XML) hooked up using domain authentication: create a domain user and give it rights to access that file on the computer holding the XML.
2. Any other situation than the one mentioned above: create a user with the same name and password on both computers and set that as the identity used by the application pool (see the appcmd sketch after this list).
3. (INSECURE) Works in any scenario, without impersonation: put the XMLs in a network share that allows anonymous access.
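For point 2, setting the application pool to run as that account can be done in IIS Manager (Advanced Settings > Identity) or from the command line with appcmd. This is only a sketch; "MyAppPool" and the credentials are placeholders for your own values:
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.identityType:SpecificUser /processModel.userName:XXX /processModel.password:XXXXX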