Hi, I am new to Laravel and working on Laravel 5.3. I have a session array and I can see the data in the session the first time, but when I refresh the page or move to another page, the session gets destroyed.
$userData = array(
    'userId'   => $result[0]->id,
    'username' => $result[0]->name,
    'email'    => $result[0]->email
);
$request->session()->set('userlogin', $userData);
print_r($request->session()->all());
Now when I print_r the session I can see the values, but when I print_r session()->all() in another controller (or another method of the same controller), the session data is gone. In simple words, the session is saved and then destroyed automatically.
Please help me out; I have wasted two days and can't figure out the problem.
Sorry for the bad English, first time on Stack Overflow.
My config/session.php file is below:
<?php
return [
/*
|--------------------------------------------------------------------------
| Default Session Driver
|--------------------------------------------------------------------------
|
| This option controls the default session "driver" that will be used on
| requests. By default, we will use the lightweight native driver but
| you may specify any of the other wonderful drivers provided here.
|
| Supported: "file", "cookie", "database", "apc",
| "memcached", "redis", "array"
|
*/
'driver' => env('SESSION_DRIVER', 'file'),
/*
|--------------------------------------------------------------------------
| Session Lifetime
|--------------------------------------------------------------------------
|
| Here you may specify the number of minutes that you wish the session
| to be allowed to remain idle before it expires. If you want them
| to immediately expire on the browser closing, set that option.
|
*/
'lifetime' => 120,
'expire_on_close' => false,
/*
|--------------------------------------------------------------------------
| Session Encryption
|--------------------------------------------------------------------------
|
| This option allows you to easily specify that all of your session data
| should be encrypted before it is stored. All encryption will be run
| automatically by Laravel and you can use the Session like normal.
|
*/
'encrypt' => false,
/*
|--------------------------------------------------------------------------
| Session File Location
|--------------------------------------------------------------------------
|
| When using the native session driver, we need a location where session
| files may be stored. A default has been set for you but a different
| location may be specified. This is only needed for file sessions.
|
*/
'files' => storage_path('framework/sessions'),
/*
|--------------------------------------------------------------------------
| Session Database Connection
|--------------------------------------------------------------------------
|
| When using the "database" or "redis" session drivers, you may specify a
| connection that should be used to manage these sessions. This should
| correspond to a connection in your database configuration options.
|
*/
'connection' => null,
/*
|--------------------------------------------------------------------------
| Session Database Table
|--------------------------------------------------------------------------
|
| When using the "database" session driver, you may specify the table we
| should use to manage the sessions. Of course, a sensible default is
| provided for you; however, you are free to change this as needed.
|
*/
'table' => 'sessions',
/*
|--------------------------------------------------------------------------
| Session Cache Store
|--------------------------------------------------------------------------
|
| When using the "apc" or "memcached" session drivers, you may specify a
| cache store that should be used for these sessions. This value must
| correspond with one of the application's configured cache stores.
|
*/
'store' => null,
/*
|--------------------------------------------------------------------------
| Session Sweeping Lottery
|--------------------------------------------------------------------------
|
| Some session drivers must manually sweep their storage location to get
| rid of old sessions from storage. Here are the chances that it will
| happen on a given request. By default, the odds are 2 out of 100.
|
*/
'lottery' => [2, 100],
/*
|--------------------------------------------------------------------------
| Session Cookie Name
|--------------------------------------------------------------------------
|
| Here you may change the name of the cookie used to identify a session
| instance by ID. The name specified here will get used every time a
| new session cookie is created by the framework for every driver.
|
*/
'cookie' => 'laravel_session',
/*
|--------------------------------------------------------------------------
| Session Cookie Path
|--------------------------------------------------------------------------
|
| The session cookie path determines the path for which the cookie will
| be regarded as available. Typically, this will be the root path of
| your application but you are free to change this when necessary.
|
*/
'path' => '/',
/*
|--------------------------------------------------------------------------
| Session Cookie Domain
|--------------------------------------------------------------------------
|
| Here you may change the domain of the cookie used to identify a session
| in your application. This will determine which domains the cookie is
| available to in your application. A sensible default has been set.
|
*/
'domain' => env('SESSION_DOMAIN', null),
/*
|--------------------------------------------------------------------------
| HTTPS Only Cookies
|--------------------------------------------------------------------------
|
| By setting this option to true, session cookies will only be sent back
| to the server if the browser has a HTTPS connection. This will keep
| the cookie from being sent to you if it can not be done securely.
|
*/
'secure' => env('SESSION_SECURE_COOKIE', false),
/*
|--------------------------------------------------------------------------
| HTTP Access Only
|--------------------------------------------------------------------------
|
| Setting this value to true will prevent JavaScript from accessing the
| value of the cookie and the cookie will only be accessible through
| the HTTP protocol. You are free to modify this option if needed.
|
*/
'http_only' => true,
];
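For reference, this is roughly the flow I am trying to get working end to end; a minimal sketch assuming both routes are registered in routes/web.php so the web middleware group (which includes the StartSession middleware) runs. The controller and route names are only illustrative:
// routes/web.php: both routes must go through the "web" middleware group,
// otherwise the StartSession middleware never runs and nothing is persisted.
Route::post('login', 'LoginController@doLogin');
Route::get('dashboard', 'DashboardController@index');

// LoginController@doLogin
public function doLogin(Request $request)
{
    // $result comes from my login query
    $userData = array(
        'userId'   => $result[0]->id,
        'username' => $result[0]->name,
        'email'    => $result[0]->email
    );

    // put() stores the data; the session file in storage/framework/sessions
    // is only written when the response is sent, so calling dd()/exit()
    // after this line would prevent it from being saved.
    $request->session()->put('userlogin', $userData);

    return redirect('dashboard');
}

// DashboardController@index
public function index(Request $request)
{
    $userData = $request->session()->get('userlogin');
    print_r($userData);
}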
Related
I have two collections: orgs and users. A user can be a regular user of org A but can also be an admin of org B. So a user document would look something like this:
{
email: "john#example.com",
name: "John Doe",
access: [
{
org: "orgA",
role: "user"
},
{
org: "orgB",
role: "admin"
}
]}
The problem with keeping everything in the same collection is that I do not want admins of org A to be able to update the access array and thereby impact org B. If I move the access array into a sub-collection under the /user collection, then when showing the list of users for each organization I'd have to make a call for each user to get the access info. Should I instead save the user IDs in an array in a sub-collection under the /org collection?
I guess my goal is to find a best practice solution for this problem.
The simplest database structure I can think of would be:
Firestore-root
|
--- users (collection)
| |
| --- $uid (document)
| |
| --- email: "john#example.com"
| |
| --- name: "John Doe"
| |
| --- userOf (map)
| | |
| | --- orgA: true
| |
| --- adminOf (map)
| |
| --- orgB: true
|
--- organizations (collection)
|
--- $orgA (document)
| |
| --- users: ["uidOne", "uidTwo"] (array)
|
--- $orgB (document)
|
--- admins: ["uidThree", "uidFour"] (array)
In this way, you can simply query the "users" collection to get the regular users of some organization, the admins, or even both.
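With that structure, getting the regular users of a particular organization is a single query on the map field; a quick sketch with the Android SDK (field names follow the structure above):
FirebaseFirestore db = FirebaseFirestore.getInstance();

// Dot notation filters on a key inside the "userOf" map.
db.collection("users")
        .whereEqualTo("userOf.orgA", true)
        .get()
        .addOnSuccessListener(querySnapshot -> {
            for (DocumentSnapshot doc : querySnapshot) {
                Log.d("TAG", doc.getString("name") + " is a regular user of orgA");
            }
        });
Swapping "userOf.orgA" for "adminOf.orgA" gives you the admins instead.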
You have two approaches: one is to embed an access key in the user document and the corresponding user reference in the org document; the other is to use a junction collection holding the IDs of both. There is no exact answer; with the right security rules, performance is practically the same either way. Decide based on the approach you feel is most appropriate for your design.
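If you go with the junction-style approach, a top-level collection of membership documents (one per user/org pair; the collection and field names here are just an assumption) keeps each org's entries separate while staying easy to query:
FirebaseFirestore db = FirebaseFirestore.getInstance();

// memberships/{autoId} = { org: "orgA", uid: "uidOne", role: "user" }
db.collection("memberships")
        .whereEqualTo("org", "orgA")
        .whereEqualTo("role", "user")
        .get()
        .addOnSuccessListener(snapshot -> {
            for (DocumentSnapshot doc : snapshot) {
                Log.d("TAG", doc.getString("uid"));
            }
        });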
I am developing a service that requires access to a DynamoDB table which must be managed by authorizing user access to the table. Account management is handled by Cognito. I am currently investigating direct access to the DynamoDB table with read/write access limited based on User Groups with associated IAM policies.
Multiple organisations exist within the table, and multiple users are tied to an organisation. An example of the model is below. I also store sector and department information in a many-to-one relationship.
The Cognito Sub for a user is stored as their user id within the database under USR#.
+-------+-------+-----------------+------------+--------+
| PK | SK | Name | GSI1PK | GSI2PK |
+-------+-------+-----------------+------------+--------+
| ORG#1 | ORG#1 | Acme Inc | | |
| ORG#1 | USR#1 | John Doe | | |
| ORG#2 | ORG#2 | Globetex | | |
| ORG#2 | USR#2 | Jane Doe | | |
| ORG#1 | SEC#1 | Sector A1 | ORG#1SEC#1 | SEC#1 |
| DEP#1 | DEP#1 | Human Resources | ORG#1SEC#1 | DEP#1 |
+-------+-------+-----------------+------------+--------+
So far I can limit access to each organisation in a hardcoded manner with a specific IAM policy. However, this is not scalable: if a hundred organisations were to exist, a hundred user groups would also have to exist, each with a separate policy. An example of such a policy is below.
Is there any way to create an IAM policy that uses a custom Cognito attribute, such as 'organization', so that a single policy limits access to only the items whose leading key matches that organization? I am unable to get this working with the code below.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query"
],
"Resource": [
"arn:aws:dynamodb:region:id:table/TableName"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${cognito-identity.amazonaws.com:org}"
]
}
}
}
]
}
Edit: For clarity, my goal is to have a custom Cognito attribute inserted dynamically into the IAM policy when it is evaluated.
For instance, User A has custom:org = Acme as a Cognito attribute and User B has custom:org = Globex as their custom Cognito attribute.
A single policy as detailed in the code above can insert this attribute directly into the policy, so one policy may be used for multiple users in separate orgs.
After further research I am unsure this is possible at all, but if anyone has any experience with trying something like this I'd love to hear it.
I think you're close; according to this article it should be StringLike, not StringEquals:
"Condition": {
"ForAllValues:StringLike": {
"dynamodb:LeadingKeys": [
"{TENANTID}-*"
]
}
You may also want to read the Multi-tenant SaaS Storage Strategies whitepaper.
Edit
I don't believe it's possible to have a static policy do what you want.
However, the code in the linked article does provide the ability to "manage access from users from any tenant".
The key points are the use of the role AccessDynamoWithTenantContext:
import boto3

sts_client = boto3.client('sts')

# Assume the tenant-scoped role with a policy built for this tenant only
tenantPolicy = getPolicy(event['tenantID'])
assumed_role = sts_client.assume_role(
    RoleArn="arn:aws:iam::<account-id>:role/AccessDynamoWithTenantContext",
    RoleSessionName="tenant-aware-product",
    Policy=tenantPolicy,
)
And the dynamic injection of the tenantID in getPolicy():
policy = json.dumps(policyTemplate).replace("{TENANTID}", tenantID)
return policy
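Under the assumptions of the article, getPolicy() is just string templating over a scoped-down policy; the template mirrors the LeadingKeys condition shown above (a sketch, not the article's exact code):
import json

def getPolicy(tenantID):
    # Scoped-down policy template; "{TENANTID}" is a placeholder that is
    # replaced with the caller's tenant before the role is assumed.
    policyTemplate = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": ["arn:aws:dynamodb:region:id:table/TableName"],
            "Condition": {
                "ForAllValues:StringLike": {
                    "dynamodb:LeadingKeys": ["{TENANTID}-*"]
                }
            }
        }]
    }
    policy = json.dumps(policyTemplate).replace("{TENANTID}", tenantID)
    return policy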
I have a database structure that looks like this:
Firestore-root
|
--- users (collection)
| |
| --- UidOne (document)
| |
| --- userName: "UserOne"
|
--- items (collection)
|
--- ItemIdOne (document)
| |
| --- itemName: "ItemOne"
|
--- ItemIdTwo
|
--- itemName: "ItemTwo"
What I want to achieve is to restrict every user from reading the item names from any document within the items collection, using security rules. This is how I do it:
service cloud.firestore {
match /databases/{database}/documents {
match /items/{item} {
allow read, write: if false;
}
}
}
To display the item names I use the following query:
Query query = itemsRef.orderBy("itemName", Query.Direction.ASCENDING);
When I run my app I get the following error:
com.google.firebase.firestore.FirebaseFirestoreException: PERMISSION_DENIED: Missing or insufficient permissions.
But the item names are still displayed in my RecyclerView. How can I stop this from happening?
Maybe check to see if your items are still coming from the local cache.
From this page, add this to your onEvent callback:
String source = querySnapshot.getMetadata().isFromCache() ?
"local cache" : "server";
Log.d(TAG, "Data fetched from " + source);
If it is reading from the local cache, you can set setPersistenceEnabled(false) like this (also mentioned on that page):
FirebaseFirestoreSettings settings = new FirebaseFirestoreSettings.Builder()
.setPersistenceEnabled(false)
.build();
db.setFirestoreSettings(settings);
Even if you are online, it will read from the local snapshot and only update the snapshot when the data changes; it's your rules that changed, not your data. When testing with persistence set to true I got some unexpected results, so I prefer to keep it false while testing and changing code/rules.
I have a database node called (people) that looks like this:
people
|
|
-------UserID1 //which is a random id
| |
| |
| ----UserId2 //which is a random id
| |
| |
| name:"some_name"
| id:"UserId2"
| image:"image_url"
|
|
|
-------UserId2
|
|
----UserId3
|
|
name:"some_name"
id:"UserId3"
image:"image_url"
If we look at the (people/UserID1/UserId2) node:
Since UserId1 and UserId2 are two random IDs, if we want to write a rule for UserId2 we will notice that it is two random-ID levels deep.
What I want is to write rules at this path that say the following:
1) people / UserId1 : can be written by (UserID1) and (UserId2).
2) people / UserId1 : can be read by (UserID1) and (UserId2).
3) people / UserId1 / UserId2 : must end up with a newData that has (name, id, image).
How do I do this?
Thanks.
Due to the way Firebase Realtime Database rules cascade into deeper keys, allowing people/UserId1 to be writable by UserId2 is not advised, as this would allow UserId2 write access to the data of other users stored under people/UserId1 like people/UserId1/UserId3.
But using this cascading behavior, we can "add" users that are granted read and write permissions as we go deeper into the data structure.
So the new conditions are:
people/UserId1 - UserId1 has read & write access
people/UserId1/UserId2 - UserId2 has read & write access
people/UserId1/UserId2 - must always contain 'name', 'id' and 'image' keys
people/UserId1/UserId3 - cannot be read/written by UserId2
{
  "rules": {
    "people": {
      "$userId1": {
        "$userId2": {
          ".read": "auth.uid == $userId2", // add $userId2 to those granted read permission, cascades into deeper keys
          ".write": "auth.uid == $userId2", // add $userId2 to those granted write permission, cascades into deeper keys
          ".validate": "newData.hasChildren(['name', 'id', 'image'])" // any new data must have 'name', 'id' and 'image' fields
        },
        ".read": "auth.uid == $userId1", // add $userId1 to those granted read permission, cascades into deeper keys
        ".write": "auth.uid == $userId1" // add $userId1 to those granted write permission, cascades into deeper keys
      }
    }
  }
}
Lastly, if it is also required that people/UserId1/UserId2/id is equal to UserId2, you can change the ".validate" rule to enforce this:
".validate": "newData.hasChildren(['name', 'id', 'image']) && newData.child('id').val() == $userId2" // any new data must have 'name', 'id' and 'image' fields and 'id' must have a value of $userId2
I have multiple channels in my shop.
The channels all have exactly the same products.
Now I want to allow the user to choose between the different channels in the checkout process.
Ideally, the shipping methods are grouped by channel.
e.g.
===========================
| Channel 1
===========================
| ( ) Pickup
| (x) Shipping
| => Move on
===========================
| Channel 2
===========================
| ( ) Pickup
| (x) Shipping
| => Move on
If I now select one of the options, multiple things happen:
The channel is switched
The ShippingMethod is selected
The Cart is transferred to the other channel
All of this should be possible with Sylius.
My biggest problem is kind of basic.
Sylius uses the ResourceBundle to load entities from the repositories.
I have a route configured like this:
sylius_shop_checkout_select_shipping:
path: /select-shipping
methods: [GET, PUT]
defaults:
_controller: sylius.controller.order:updateAction
_sylius:
event: select_shipping
flash: false
template: SyliusShopBundle:Checkout:selectShipping.html.twig
form: Sylius\Bundle\CoreBundle\Form\Type\Checkout\SelectShippingType
repository:
method: find
arguments:
- "expr:service('sylius.context.cart').getCart()"
state_machine:
graph: sylius_order_checkout
transition: select_shipping
redirect:
route: sylius_shop_checkout_select_payment
parameters: []
Only a single "repository" key is allowed, though.
Do I have to build my own controller and use the repositories directly from the service container?
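If the single repository key turns out to be too limiting, one option is a thin custom controller action that talks to the container services directly. A rough sketch (the sylius.repository.channel and sylius.context.cart service ids follow the usual Sylius naming, so verify them against your container; the class name is illustrative):
// AppBundle/Controller/ChannelAwareShippingController.php
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Request;

class ChannelAwareShippingController extends Controller
{
    public function selectShippingAction(Request $request)
    {
        $cart = $this->get('sylius.context.cart')->getCart();
        $channels = $this->get('sylius.repository.channel')->findAll();

        // Render one group of shipping methods per channel; on submit,
        // switch the cart's channel, set the chosen shipping method and
        // apply the select_shipping transition on the state machine.
        return $this->render('SyliusShopBundle:Checkout:selectShipping.html.twig', [
            'cart'     => $cart,
            'channels' => $channels,
        ]);
    }
}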