FreeRADIUS 2 traffic counter not working - VPN

I've added this block to radiusd.conf:
sqlcounter monthlytrafficcounter {
        counter-name = Monthly-Traffic
        check-name = Max-Monthly-Traffic
        reply-name = Monthly-Traffic-Limit
        sqlmod-inst = SQL
        key = User-Name
        reset = monthly
        query = "SELECT SUM(acctinputoctets + acctoutputoctets) FROM radacct WHERE UserName='%{%k}' AND UNIX_TIMESTAMP(AcctStartTime) > '%b'"
}
and added these entries to the dictionary:
ATTRIBUTE Max-Monthly-Traffic 3003 integer
ATTRIBUTE Monthly-Traffic-Limit 3004 integer
Then I added monthlytrafficcounter to the authorize section in /etc/freeradius/sites-enabled/default.
But it doesn't work.
Max-Monthly-Traffic is defined in the MySQL table radgroupcheck, and the users have been added to the group in radusergroup.
Although a user has reached the traffic limit, he can still be authorized by FreeRADIUS:
http://i.stack.imgur.com/RIVsZ.jpg

Try moving your radiusd.conf block to sql/mysql/counter.conf.
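For reference, a minimal sketch of the relocated block (mirroring the question's config; the path is the Debian/Ubuntu default layout for FreeRADIUS 2):

# /etc/freeradius/sql/mysql/counter.conf
sqlcounter monthlytrafficcounter {
        counter-name = Monthly-Traffic
        check-name = Max-Monthly-Traffic
        reply-name = Monthly-Traffic-Limit
        sqlmod-inst = SQL
        key = User-Name
        reset = monthly
        query = "SELECT SUM(acctinputoctets + acctoutputoctets) FROM radacct WHERE UserName='%{%k}' AND UNIX_TIMESTAMP(AcctStartTime) > '%b'"
}

Keep monthlytrafficcounter listed in the authorize section of sites-enabled/default, and make sure counter.conf is actually $INCLUDEd from radiusd.conf (the stock config typically ships a commented-out $INCLUDE sql/mysql/counter.conf line).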

Terraform and KMS key aliases

I am using the aws provider and trying to create an aws_workspaces_workspace with encrypted volumes.
I created an aws_kms_key with an associated alias (aws_kms_alias).
I specified the key alias (as a string) for volume_encryption_key.
The resource is created as expected and I can verify in the console that the volumes are encrypted with the specified key.
My issue is that every time I re-run terraform apply, Terraform reports that the aws_workspaces_workspace needs to be replaced because of a change in the key value (from a key ID to the alias).
How can I prevent this from happening? Is this a bug? Am I doing something incorrectly? Some of the relevant code is below.
resource "aws_workspaces_workspace" "workspace" {
directory_id = aws_workspaces_directory.ws-ad.id
bundle_id = var.bundle_id
user_name = var.username
root_volume_encryption_enabled = true
user_volume_encryption_enabled = true
volume_encryption_key = "alias/workspace-volume"
workspace_properties {
compute_type_name = "POWER"
user_volume_size_gib = 80
root_volume_size_gib = 50
running_mode = "AUTO_STOP"
running_mode_auto_stop_timeout_in_minutes = 60
}
}
resource "aws_kms_key" "kms-ws-volume" {
description = "Workspace Volume Encryption Key"
key_usage = "ENCRYPT_DECRYPT"
deletion_window_in_days = 30
is_enabled = true
}
resource "aws_kms_alias" "kms-ws-volume-alias" {
name = "alias/workspace-volume"
target_key_id = aws_kms_key.kms-ws-volume.key_id
}
Here's what terraform apply reports:
# aws_workspaces_workspace.workspace["1"] must be replaced
-/+ resource "aws_workspaces_workspace" "workspace" {
      ~ computer_name         = "WSAMZN-T34E23BK" -> (known after apply)
      ~ id                    = "ws-v98b0y17z" -> (known after apply)
      ~ ip_address            = "10.0.0.45" -> (known after apply)
      ~ state                 = "STOPPED" -> (known after apply)
        tags                  = {
            "Name"    = "workspace-user1-env1"
            "Owner"   = "mario"
            "Profile" = "dev"
            "Stack"   = "env1"
        }
      ~ volume_encryption_key = "arn:aws:kms:us-west-2:927743275319:key/09de3db9-ecdd-4be1-a781-705fdd0294f9" -> "alias/workspace-volume" # forces replacement
        # (6 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }
Use the ARN of the key: aws_kms_key.kms-ws-volume.arn.
The provider stores the key's ARN in volume_encryption_key, so when you supply the alias instead, the plan detects a change and forces replacement.
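A minimal sketch of the fix, reusing the resource names from the question:

resource "aws_workspaces_workspace" "workspace" {
  # ... other arguments as in the question ...

  # reference the key's ARN instead of the alias string
  volume_encryption_key = aws_kms_key.kms-ws-volume.arn
}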
The example at https://registry.terraform.io/providers/hcavarsan/aws/latest/docs/resources/workspaces_workspace might be misleading in this regard, even though an alias will also work.
Something similar happens with kms_key_id of aws_instance: it stores the ARN rather than the key_id, so the plan always requires a volume replacement when you use the key_id instead of the ARN. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#kms_key_id

AppMaker - Navigate to Last Page on Table

Scenario:
I have a calculated SQL datasource that returns 100 results.
I added a table (from this calculated SQL) and limited the page size to 25 results.
This generates 4 pages.
The pager from App Maker works well (it navigates between pages), but I need a button that navigates directly from page 1 to page 4.
Is this possible?
Has anyone got a solution for this?
Regards
If you need to know how many entries your table has (in your case it seems fixed at 100, but maybe it could grow), you can still do what you want.
E.g. say your table on YOURPAGE depends on a datasource called Customers.
Create a new data item called CustomerCount, with just one field, called Count (integer).
Its data source would be a SQL query script:
SELECT COUNT(CustomerName) AS Count FROM Customers
On the page holding the table, add a custom property (say Count, of type Integer).
In the page's onAttach event, set the property asynchronously with this custom action:
app.datasources.CustomerCount.load(function() {
  // copy the count into the page property, then jump the table's datasource to the last page
  app.pages.YOURPAGE.properties.Count = app.datasources.CustomerCount.item.Count;
  app.datasources.Customers.query.pageIndex = Math.ceil(app.pages.YOURPAGE.properties.Count / 25);
  app.datasources.Customers.load();
});
I tried similar things successfully in the past.
I found a solution for this:
ServerScript:
function CandidateCountRows() {
  var query = app.models.candidate.newQuery();
  var records = query.run();
  console.log("Number of records: " + records.length);
  return records.length;
}
In the button code:
var psize = widget.datasource.query.pageSize;
var pidx = widget.datasource.query.pageIndex;
var posicao = psize * pidx;
var nreg = posicao;
google.script.run.withSuccessHandler(function(Xresult) {
  nreg = Xresult;
  console.log('position: ' + posicao);
  console.log('nreg: ' + nreg);
  console.log('psize: ' + psize);
  console.log('pidx: ' + pidx);
  // advance one page at a time until the last page is reached
  var i;
  for (i = pidx; i < (nreg / psize); i++) {
    widget.datasource.nextPage();
  }
  widget.datasource.selectIndex(1);
}).CandidateCountRows();
This allows navigating to the last page.
If you know for a fact that your query always returns 100 records and that your page size will always be 25 records, then the simplest approach is to make sure your button is tied to the same datasource and attach the following onClick event:
widget.datasource.query.pageIndex = 4;
widget.datasource.load();
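If the total can vary instead, a hedged generalization of the same idea (assuming the 1-based pageIndex used above, with totalRecords obtained from a count query as in the earlier answers):

var q = widget.datasource.query;
q.pageIndex = Math.ceil(totalRecords / q.pageSize); // e.g. 100 / 25 = page 4
widget.datasource.load();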

How do MaxItemCount from FeedOptions and RetrievedDocumentCount from QueryMetrics work in Cosmos DB, and why do they never match?

I am currently facing a query performance issue with Cosmos DB. I am quite sure I have followed most of the performance tips from the Microsoft page, but the query still takes > 1 second.
Connection policy
private static readonly ConnectionPolicy ConnectionPolicy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Direct,
    ConnectionProtocol = Protocol.Tcp,
    RequestTimeout = new TimeSpan(1, 0, 0),
    MaxConnectionLimit = 1000,
    RetryOptions = new RetryOptions
    {
        MaxRetryAttemptsOnThrottledRequests = 10,
        MaxRetryWaitTimeInSeconds = 60
    }
};
Document Client
this.Client = new DocumentClient(new Uri(config.DocumentDBURI), config.DocumentDBKey, ConnectionPolicy);
Document Query
FeedOptions options = new FeedOptions
{
    MaxItemCount = config.getSearchLimit, // which is 100
    PartitionKey = new PartitionKey(partitionKey),
    RequestContinuation = responseContinuation
};
var documentQuery = Client.CreateDocumentQuery<SearchByAttributesResult>(
    this.TenantCollectionUri,
    querySpec,
    options).AsDocumentQuery();
Query 1
SELECT p.Doc.id, p.Doc.Name, p.Doc.isOrganization, p.Doc.organizationLegalName,
       p.Doc.isFactoryAutoUpdate, p.Doc.StartDate, p.Doc.EndDate,
       p.Doc.InactiveReasonCode, p.Doc.Specialty.specialty AS AllSpecialty, Address
FROM p
JOIN Address IN p.Doc.Address.address
WHERE (p.Doc.EndDate = null OR (p.Doc.StartDate <= @STARTDATE AND p.Doc.EndDate >= @ENDDATE))
  AND CONTAINS(p.Doc.Name, @PROVIDERNAME)
  AND Address.alpha2Code = @ALPHA2CODE
Query 2
SELECT p.Doc.id, p.Doc.Name, p.Doc.isOrganization, p.Doc.organizationLegalName,
       p.Doc.isFactoryAutoUpdate, p.Doc.StartDate, p.Doc.EndDate,
       p.Doc.InactiveReasonCode, p.Doc.Specialty.specialty AS AllSpecialty, Address
FROM p
JOIN Address IN p.Doc.Address.address
WHERE (p.Doc.EndDate = null OR (p.Doc.StartDate <= @STARTDATE AND p.Doc.EndDate >= @ENDDATE))
  AND STARTSWITH(Address.postalCode, @POSTALCODE)
  AND Address.alpha2Code = @ALPHA2CODE
The query above changes based on the user's search conditions.
I have only 900 documents in my collection, but the query always takes > 1 second.
I am trying to understand a few points here:
Though I set MaxItemCount to 100, why do I see a RetrievedDocumentCount of 900 in QueryMetrics?
Is the use of CONTAINS/STARTSWITH causing this performance issue?
What am I doing wrong here, and how can I improve the query performance to sub-second (< 0.5 s)?
First things first: MaxItemCount doesn't mean that you will get only the top 100 documents.
It means that each iteration of ExecuteNextAsync returns up to 100 documents at a time, but across iterations you still get everything that matches the query, which is why RetrievedDocumentCount can reach 900.
If you want to limit your results to the top 100, then in LINQ use the .Take(100) method before you call AsDocumentQuery, or in SQL use the TOP keyword.
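For instance, a hedged sketch of both forms (reusing Client, options, and the result type from the question; note that LINQ operators only compose when the query is built with LINQ rather than a SqlQuerySpec):

// LINQ: apply Take before AsDocumentQuery (Where clauses omitted here)
var top100 = Client.CreateDocumentQuery<SearchByAttributesResult>(
        this.TenantCollectionUri, options)
    .Take(100)
    .AsDocumentQuery();

// SQL: the equivalent cap inside the query text
// SELECT TOP 100 p.Doc.id, p.Doc.Name, ... FROM p ...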
In terms of performance, this query is expensive for three reasons:
It checks for records between a range of dates.
It uses the CONTAINS/STARTSWITH functions.
It uses a JOIN.
At this point, if changing the schema isn't an option, I would recommend reading more about Indexing and optimising it based on the querying requirements of your application.
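As an illustration only (the policy below is the broad legacy-style default, a starting point to tune rather than a drop-in fix): STARTSWITH and the date-range comparisons can be served by Range indexes with precision -1, whereas CONTAINS still has to scan the retrieved documents.

{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*",
      "indexes": [
        { "kind": "Range", "dataType": "Number", "precision": -1 },
        { "kind": "Range", "dataType": "String", "precision": -1 }
      ]
    }
  ],
  "excludedPaths": []
}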

ScalikeJDBC + SQLite: Cannot change read-only flag after establishing a connection

I'm trying to get ScalikeJDBC working with SQLite. I have simple code based on the provided examples:
import scalikejdbc._, SQLInterpolation._

object Test extends App {
  Class.forName("org.sqlite.JDBC")
  ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)
  implicit val session = AutoSession
  println(sql"""SELECT * FROM kv WHERE key == 'seq' LIMIT 1""".map(identity).single().apply())
}
It fails with this exception:
Exception in thread "main" java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and QLiteConfig.createConnection().
at org.sqlite.SQLiteConnection.setReadOnly(SQLiteConnection.java:447)
at org.apache.commons.dbcp.DelegatingConnection.setReadOnly(DelegatingConnection.java:377)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setReadOnly(PoolingDataSource.java:338)
at scalikejdbc.DBConnection$class.readOnlySession(DB.scala:138)
at scalikejdbc.DB.readOnlySession(DB.scala:498)
...
I've tried both scalikejdbc 1.7 and 2.0; the error remains. As the SQLite driver I use "org.xerial" % "sqlite-jdbc" % "3.7.+".
What can I do to fix the error?
The following will create two separate connection pools, one for read-only operations and the other for writes.
ConnectionPool.add("mydb", s"jdbc:sqlite:${db.getAbsolutePath}", "", "")
ConnectionPool.add(
"mydb_ro", {
val conf = new SQLiteConfig()
conf.setReadOnly(true)
val source = new SQLiteDataSource(conf)
source.setUrl(s"jdbc:sqlite:${db.getAbsolutePath}")
new DataSourceConnectionPool(source)
}
)
I found that the reason is that you're using "org.xerial" % "sqlite-jdbc" % "3.7.15-M1". This version still looks unstable.
Use "3.7.2", the same as @kawty did.
Building on @Synesso's answer, I expanded it slightly to read config values from config files and to set connection settings:
import javax.sql.DataSource

import org.sqlite.{SQLiteConfig, SQLiteDataSource}
import scalikejdbc._
import scalikejdbc.config.{JDBCSettings, TypesafeConfigReader}

case class SqlLiteDataSourceConnectionPool(source: DataSource,
                                           override val settings: ConnectionPoolSettings)
  extends DataSourceConnectionPool(source)

// read settings for the 'default' database
val cpSettings = TypesafeConfigReader.readConnectionPoolSettings()
val JDBCSettings(url, user, password, driver) = TypesafeConfigReader.readJDBCSettings()

// use those to create two connection pools
ConnectionPool.add("db", url, user, password, cpSettings)
ConnectionPool.add(
  "db_ro", {
    val conf = new SQLiteConfig()
    conf.setReadOnly(true)
    val source = new SQLiteDataSource(conf)
    source.setUrl(url)
    SqlLiteDataSourceConnectionPool(source, cpSettings)
  }
)

// example using 'NamedDB'
val name: Option[String] = NamedDB("db_ro") readOnly { implicit session =>
  sql"select name from users where id = $id".map(rs => rs.string("name")).single.apply()
}
This worked for me with org.xerial/sqlite-jdbc 3.28.0:
String path = ...
SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(true);
return DriverManager.getConnection("jdbc:sqlite:" + path, config.toProperties());
Interestingly, I wrote a different solution on the issue on the xerial repo:
PoolProperties props = new PoolProperties();
props.setDriverClassName("org.sqlite.JDBC");
props.setUrl("jdbc:sqlite:...");
Properties extraProps = new Properties();
extraProps.setProperty("open_mode", SQLiteOpenMode.READONLY.flag + "");
props.setDbProperties(extraProps);
// This line can be left in or removed; it no longer causes a problem
// as long as the open_mode code is present.
props.setDefaultReadOnly(true);
return new DataSource(props);
I don't recall why I needed the second, and was then able to simplify it back to the first one. But if the first doesn't work, you might try the second. It uses a SQLite-specific open_mode flag that makes it safe (though unnecessary) to call setDefaultReadOnly.

LINQ to DataSet query optimization

I have the following LINQ queries:
var itembind = (from q in dsSerach.Tables[0].AsEnumerable()
                select new
                {
                    PatternID = q.Field<int>("PatternID"),
                    PatternName = q.Field<string>("PatternName") + " " + q.Field<string>("ColorID") + q.Field<string>("BookID"),
                    ColorID = q.Field<string>("ColorID"),
                    BookID = q.Field<string>("BookID"),
                    CoverImage = (from img1 in objJFEntities.ProductImages.ToList()
                                  where img1.PatternName.ToLower() == q.Field<string>("PatternName").ToLower()
                                  select new CoverImage
                                  {
                                      URL = "Images/MediumPatternImages/" +
                                            q.Field<string>("PatternName") + "_" + q.Field<string>("ColorID") + q.Field<string>("BookID") + q.Field<string>("ImageExtension"),
                                      ID = q.Field<int>("ProductImageID")
                                  }).FirstOrDefault(),
                    TotalCount = q.Field<int>("TotalCount")
                }).Distinct();
var patterns = (from r in itembind
                group r by new { r.PatternID, r.ColorID } into g
                select new SearchPattern
                {
                    PatternID = g.Key.PatternID,
                    PatternName = string.Join(",", g.OrderBy(s => s.ColorID).OrderBy(s => s.BookID)
                        .Select(s => String.Format("<a href='{0:s}' title='{1:s}'>{2:s}</a><br />",
                            new object[] { String.Format("Product.aspx?ID={0}&img={1}", g.Key.PatternID, s.CoverImage.ID), s.PatternName, s.PatternName })).FirstOrDefault()),
                    CoverImage = g.Count() > 1 ? (from img1 in objJFEntities.ProductImages.ToList()
                                                  where img1.ProductImageID == g.Select(i => i.CoverImage.ID).FirstOrDefault() && img1.ColorID.ToString() == g.Key.ColorID
                                                  select new CoverImage
                                                  {
                                                      URL = "Images/MediumPatternImages/" +
                                                            img1.PatternName + "_" + img1.ColorID + img1.BookID + img1.ImageExtension,
                                                      ID = img1.ProductImageID
                                                  }).FirstOrDefault() : g.Select(i => i.CoverImage).FirstOrDefault()
                }).ToList();
These queries take more than 1 minute to execute for only 1000 records.
dsSearch is a DataSet filled with records returned from my procedure in SQL.
I am using Entity Framework. The site is deployed on IIS 7.0 and uses SQL Server 2008.
I very frequently get errors on the site such as "Error Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.",
"Cannot open database "DB" requested by the login. The login failed." and "The underlying provider failed on Open."
Please tell me how to optimize such a query.
EDIT:
Here is the procedure
http://pastie.org/7160934
In the first query you are doing objJFEntities.ProductImages.ToList(); with the ToList() call you fetch every entry from the database and then filter the results in memory.
Rolfvm is correct in pointing out that objJFEntities.ProductImages causes the problem, but the analysis is a bit different: you fetch the entire ProductImages table into memory for each iteration of the query when you enumerate over it. So one optimization would be to fetch the images into a collection first and use that collection in the query statement:
var localImages = objJFEntities.ProductImages.ToList();
...
CoverImage = (from img1 in localImages....
But then, your query seems to do far too much. You build the first part, itembind, without executing it. Then you build the second part (var patterns = from r in itembind ...) and execute it with ToList(). But in the second part you never use the CoverImage from the first part, so creating those is a waste of resources. (Or you trimmed the code, hiding another use of the first part.)
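As a hedged sketch of the prefetch idea (entity and column names come from the question; everything else is an assumption), indexing the prefetched images by lower-cased PatternName turns the per-row scan into an O(1) lookup:

// Fetch ProductImages once and index it by lower-cased pattern name.
var imagesByPattern = objJFEntities.ProductImages
    .ToList()
    .ToLookup(img => img.PatternName.ToLower());

// Then, inside the first query, replace the nested "from img1 in ..." subquery with
// a probe into the lookup (an empty group simply yields no match):
// CoverImage = imagesByPattern[q.Field<string>("PatternName").ToLower()]
//     .Select(img1 => new CoverImage { /* as in the question */ })
//     .FirstOrDefault(),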
