Forward SIP call to number - Asterisk

I want to forward a SIP call like this:
----------
1001 User1
1002 User2
2001 User3
3001 User4
----------
When User1 (1001) calls 1, I want to forward the call to User3 (2001).
When User2 (1002) calls 1, I want to forward the call to User4 (3001).
Does anybody know how I can do that in Asterisk?

You should create a DIFFERENT dialplan entry for each user. In extensions.conf the pattern exten => <extension>/<callerid> matches on both the dialed extension and the caller ID, so extension 1 can ring a different destination depending on who dials it.
Something like this:
[from-internal]
exten => 1/1001,1,Dial(SIP/2001,,o) ; extension 1, caller ID 1001 -> ring 2001
exten => 1/1002,1,Dial(SIP/3001,,o) ; extension 1, caller ID 1002 -> ring 3001


How to restart SQLite auto generated id

I have a table with three columns: Id, Value, User. For each user, I want at most 50 values in the table. When a user has inserted the value with Id = 50, then before inserting the 51st value, I want to delete the first one (the one with Id = 1) and substitute it with the new one, which will have Id = 1. This operation must be specific to each user. I'll use an example to be clearer:
Id    Value    User
50    124      User1
49    67       User2
50    89       User3
Suppose User1 wants to add a new value, the result must be this one:
Id    Value    User
49    67       User2
50    89       User3
1     101      User1
Of course, Id, Value and User can't be primary keys. So, first of all: is a primary key needed? If not, the problem is easy to solve. Otherwise, I think I will have to add a new column, let's call it PK, that will be the primary key and auto-generated. From what I read online, if I delete a row, the auto-generated key will not restart automatically, so the situation will be more or less like this one:
PK    Id    Value    User
1     50    124      User1
2     49    67       User2
3     50    89       User3
and after the update, it will be:
PK    Id    Value    User
2     49    67       User2
3     50    89       User3
4     1     101      User1
I would like that, when the value 101 is inserted, PK is set to 1, and that if User2 adds a new value, it is assigned PK = 4 (as 2 and 3 are already used):
PK    Id    Value    User
2     49    67       User2
3     50    89       User3
1     1     101      User1
4     50    33       User2
Is it possible?
I ask because a new value is inserted roughly every 5 seconds by each user, and the number of users is very high, so I am worried that the auto-generated PK could reach its limit very quickly if it is not "restarted".
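A minimal sketch of one way to get this behaviour with no auto-generated key at all, assuming the 50-slot-per-user ring described above: make (User, Id) a composite primary key and let the application compute the next slot, so INSERT OR REPLACE overwrites the oldest row once the ring wraps. Table, column and function names here are made up for the example.
import sqlite3

MAX_ROWS_PER_USER = 50  # ring size taken from the question

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (
        Id    INTEGER NOT NULL,     -- 1..50, reused cyclically per user
        Value INTEGER NOT NULL,
        User  TEXT    NOT NULL,
        PRIMARY KEY (User, Id)      -- composite key, no AUTOINCREMENT needed
    );
    CREATE TABLE counters (
        User      TEXT PRIMARY KEY,
        last_slot INTEGER NOT NULL  -- slot used by this user's latest insert
    );
""")

def insert_value(conn, user, value):
    with conn:  # one transaction per insert
        row = conn.execute(
            "SELECT last_slot FROM counters WHERE User = ?", (user,)
        ).fetchone()
        next_slot = 1 if row is None else row[0] % MAX_ROWS_PER_USER + 1
        # Overwrites whatever row previously occupied this slot, which is
        # exactly the delete-then-reinsert described above.
        conn.execute(
            "INSERT OR REPLACE INTO readings (Id, Value, User) VALUES (?, ?, ?)",
            (next_slot, value, user),
        )
        conn.execute(
            "INSERT OR REPLACE INTO counters (User, last_slot) VALUES (?, ?)",
            (user, next_slot),
        )

insert_value(conn, "User1", 124)
Since Id never grows past 50 and no auto-generated PK is left, there is nothing that can overflow.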

What is the purpose of the payload in the ICMPv6 Packet Too Big message?

I have gone through RFC 4443 and RFC 8201. Perhaps I did not fully understand them, as I am new to some of the terminology described in these RFCs, but I want to understand the implication of the payload in the ICMPv6 Packet Too Big message.
As per the RFC 4443
Link to RFC 4443
The payload will contain as much of the invoking packet as possible without the Packet Too Big message exceeding the minimum IPv6 MTU.
I don't understand the use case of such a payload; even in RFC 8201 there is no mention of how the payload is used.
The only comment present was:
Added clarification in Section 4, "Protocol Requirements", that
nodes should validate the payload of ICMP PTB messages per RFC
4443, and that nodes should detect decreases in PMTU as fast as
possible.
What are the implications of validating the payload of a PTB message, how should we validate the payload, and based on what conditions?
Packet Too Big Message as per RFC 4443.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |     Code      |           Checksum            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                              MTU                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                  As much of invoking packet                   |
+              as possible without the ICMPv6 packet            +
|              exceeding the minimum IPv6 MTU [IPv6]            |
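As far as I understand it, the payload is included so that the node receiving the PTB message can tell which of its own packets (and therefore which upper-layer connection or flow) triggered the error, and so it can sanity-check the message before lowering its path MTU, since PTB messages can be spoofed by off-path attackers. A minimal sketch of that idea in Python, assuming the raw ICMPv6 bytes are already in hand (the function name and the returned fields are made up for illustration):
import struct
import ipaddress

def parse_packet_too_big(icmpv6_bytes):
    # ICMPv6 header: Type (1 byte) | Code (1) | Checksum (2) | MTU (4)
    icmp_type, code, checksum, mtu = struct.unpack("!BBHI", icmpv6_bytes[:8])
    if icmp_type != 2:              # 2 = Packet Too Big (RFC 4443)
        raise ValueError("not a Packet Too Big message")

    invoking = icmpv6_bytes[8:]     # as much of the original packet as fit
    if len(invoking) < 40:          # need at least the fixed IPv6 header
        raise ValueError("embedded packet truncated before the IPv6 header")

    # Source/destination of the packet that was too big, read from the
    # embedded IPv6 header; a node would check that these (and, deeper in,
    # the transport ports) match a flow it actually has open before
    # accepting the advertised MTU.
    orig_src = ipaddress.IPv6Address(invoking[8:24])
    orig_dst = ipaddress.IPv6Address(invoking[24:40])
    return {"mtu": mtu, "orig_src": orig_src, "orig_dst": orig_dst}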

Are messages dropped in Raft?

I'm reading about Raft, but I got a bit confused about consensus after a network partition.
So, consider a cluster of 2 nodes: 1 leader, 1 follower.
Before partitioning, X messages were written and successfully replicated. Then imagine that a network problem caused a partition, so there are 2 partitions, A (ex-leader) and B (ex-follower), which are now both leaders (receiving writes):
Before partition | Messages   |x| Partition   | Messages
Leader           | 0 1 2 3 4  |x| Partition A | 5 6 7 8 9
Follower         | 0 1 2 3 4  |x| Partition B | 5' 6' 7' 8' 9'
After the partition is resolved, what happens?
a) We elect 1 new leader and keep only its log (dropping the messages of the new follower)?
e.g.:
0 1 2 3 4 5 6 7 8 9 (total of 10 messages, 5 dropped)
or even:
0 1 2 3 4 5' 6' 7' 8' 9' (total of 10 messages, 5 dropped)
(depending on which node got to be leader)
b) We elect a new leader and find a way to make consensus of all the messages?
0 1 2 3 4 5 5' 6 6' 7 7' 8 8' 9 9' (total of 15 messages, 0 dropped)
If b, is there any specific way of doing that, or does it depend on the client implementation (e.g. message timestamps...)?
The leader's log is taken to be "the log" once the leader has been elected and has successfully written its initial log entry for the term. However, in your case the starting premise is not correct: in a cluster of 2 nodes, a node needs 2 votes to become leader, not 1. So, given a network partition, neither node will be leader (and neither side can accept writes).
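To make the vote-counting point concrete, here is a tiny sketch of the quorum arithmetic (plain Python, just for illustration): a candidate needs a strict majority of the full cluster, so a single node cut off in a two-node cluster can never win an election.
def quorum(cluster_size):
    # Strict majority of the whole cluster, not of the reachable nodes.
    return cluster_size // 2 + 1

for n in (2, 3, 5):
    print(f"cluster of {n}: needs {quorum(n)} votes, "
          f"tolerates {n - quorum(n)} failed/partitioned node(s)")
# cluster of 2: needs 2 votes, tolerates 0 failed/partitioned node(s)
# cluster of 3: needs 2 votes, tolerates 1 failed/partitioned node(s)
# cluster of 5: needs 3 votes, tolerates 2 failed/partitioned node(s)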

Creating a summary table of user event data

Edit 2: I realized I can use dcast() to do what I want to do. However, I do not want to count all of the events in the event data, only those that happened before a date specified in another data set. I can't seem to figure out how to use the subset argument in dcast(). So far I've tried:
dcast(dt.events, Email ~ EventType, fun.aggregate = length,
      subset = as.Date(Date) <= as.Date(dt.users$CreatedDate[dt.users$Email == dt.events$Email]))
However, this doesn't work. I could add the CreatedDate column from dt.users to dt.events and then subset using:
dcast(dt.events, Email ~ EventType, fun.aggregate = length,
      subset = as.Date(Date) <= as.Date(CreatedDate))
I was wondering if it was possible to do this without having to add the extra column?
Edit: I just calculated that it'll probably take about 37 hours to complete the way I'm currently doing it, so if anyone has any tips to make this faster, please let me know :)
I'm new to R. I've figured out a way to do what I want to do, but it's extremely inefficient and takes hours to complete.
I have the following:
Event data:
UserID Email EventType Date
User1 User1#*.com Type2 2016-01-02
User1 User1#*.com Type6 2016-01-02
User1 User1#*.com Type1 2016-01-02
User1 User1#*.com Type3 2016-01-02
User2 User2#*.com Type1 2016-01-02
User2 User2#*.com Type1 2016-01-02
User2 User2#*.com Type2 2016-01-02
User3 User3#*.com Type1 2016-01-02
User3 User3#*.com Type3 2016-01-02
User1 User1#*.com Type2 2016-01-04
User1 User1#*.com Type2 2016-01-04
User2 User2#*.com Type5 2016-01-04
User3 User3#*.com Type1 2016-01-04
User3 User3#*.com Type4 2016-01-04
Every time a user does something, an event is recorded with an event type, with a time stamp.
User list from different database:
UserID Email CreatedDate
DxUs1 User1#*.com 2016-01-01
DxUs2 User2#*.com 2016-01-03
DxUs3 User3#*.com 2016-01-03
I want to get the following:
A summarized list which counts the amount of each event type in the Event Data for each user in the User List. However, events should only be counted if the "CreatedDate" in the user list is before or equal to the "Date" in the Event Data.
So for the above data I would eventually want to get:
Email Type1 Type2 Type3 Type4 Type5 Type6
User1#*.com 1 3 1 0 0 1
User2#*.com 0 0 1 0 1 0
User3#*.com 1 0 0 1 0 0
How I've managed to do it so far
I've been able to do this by first creating a 'dt.master' data.table that includes all the columns for all events and the list of Emails. Which looks like this:
Email Type1 Type2 Type3 Type4 Type5 Type6
User1#*.com 0 0 0 0 0 0
User2#*.com 0 0 0 0 0 0
User3#*.com 0 0 0 0 0 0
And then filling out this table using the while loop below:
# The data sets
dt.events   # event data
dt.users    # user list
dt.master   # blank master table

# Loop that fills master table
counter_limit = group_size(dt.master)
index = 1
while (index <= counter_limit) {
  # Get events of user at current index
  dt.events.temp = filter(dt.events, dt.events$Email %in% dt.users$Email[index],
                          as.Date(dt.events$Date) <= as.Date(dt.users$CreatedDate[index]))
  # Count all the different events
  dt.event.counter = as.data.table(t(as.data.table(table(dt.events.temp$EventType))))
  # Clean the counter by 1: Rename columns to event names, 2: Remove event names row
  names(dt.event.counter) = as.character(unlist(dt.event.counter[1,]))
  dt.event.counter = dt.event.counter[-1]
  # Fill the current index in on the blank master table
  set(dt.master, index, names(dt.event.counter), dt.event.counter)
  index = index + 1
}
The Problem
This does work... However, I am dealing with 9+ million events, 250k+ users and 150+ event types, so the above while loop takes HOURS to finish. I tested it with a small batch of 500 users, which had the following processing time:
user system elapsed
179.33 62.92 242.60
I'm still waiting for the full batch to be processed haha. I've read somewhere that loops should be avoided, as they take a lot of time. However I am completely new to R and programming in general, and I've been learning through trial/error and Googling whatever I've needed. Clearly that leads to some messy code. I was wondering if anyone could point me in the direction of something that might be faster/more efficient?
Thanks!
TL;DR: My event aggregation/summarization code takes several hours to process my data (it's still not done). Is there any faster way to do it?
Assuming your data is already in a data.table, you could use the fun.aggregate parameter in dcast:
dcast(dat, Email ~ EventType, fun.aggregate = length)
gives:
Email Type1 Type2 Type3 Type4 Type5 Type6
1: User1#*.com 1 2 1 0 0 1
2: User2#*.com 4 1 0 0 1 0
3: User3#*.com 0 1 1 1 0 0
In response to the comments and the updated question: you can get the desired result by using a non-equi join inside the dcast call:
dcast(dt.events[dt.users, on = .(Email, Date >= CreatedDate)],
Email ~ EventType, fun.aggregate = length)
which gives:
Email Type1 Type2 Type3 Type4 Type5 Type6
1: User1#*.com 1 2 1 0 0 1
2: User2#*.com 1 0 0 0 1 0
3: User3#*.com 0 1 0 1 0 0
Untested:
library(dplyr)
library(tidyr)

your.dataset %>%
  count(Email, EventType) %>%
  spread(EventType, n)

How to join data from three different spreadsheets?

I have 3 TSV files containing different data on my employees. I can join these data sets using the last name and first name of the employees, which appear in each file.
I would like to gather all the data for each employee in a single spreadsheet.
(I can't just copy/paste the columns, because some employees are not in file number 2, for example, but will be in file number 3.)
So I think (I am a beginner) a script could do that: for each employee (a row), gather as much data as possible from the files into a new TSV file.
Edit.
Example of what I have (in reality I have approximately 300 rows in each file, and some employees are not in all files).
file 1
john hudson 03/03 male
mary kate 34/04 female
harry loup 01/01 male
file 2
harry loup 1200$
file3
mary kate atlanta
What I want :
column1 column2 column3 column4 column5 column6
john hudson 03/03 male
mary kate 34/04 female atlanta
harry loup 01/01 male 1200$
It would help me a lot!
Use this Python script:
import sys, re

r = []     # one dict per input file: first column -> rest of the line
res = []   # row order, taken from the first file
i = 0
for f in sys.argv[1:]:
    r.append({})
    for l in open(f):
        # The first whitespace-separated field is used as the join key.
        a, b = re.split(r'\s+', l.rstrip(), maxsplit=1)
        r[i][a] = b
        if i == 0:
            res += [a]
    i += 1

for l in res:
    print(l, " ".join(r[k].get(l, '-') for k in range(i)))
The script loads each file into a dictionary (the first column is used as the key).
Then the script iterates through the values of the first column of the first file and
writes the corresponding values from the dictionaries that were created from the other files.
Example of usage:
$ cat 1.txt
user1 100
user2 200
user3 300
$ cat 2.txt
user2 2200
user3 2300
$ cat 3.txt
user1 1
user3 3
$ python 1.py [123].txt
user1 100 - 1
user2 200 2200 -
user3 300 2300 3
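If pandas happens to be available, the same outer-join idea can be written more compactly. This is just a sketch under the same assumptions as the example above (whitespace-separated files whose first column is the join key; the file and column names are made up):
import pandas as pd
from functools import reduce

# Read each file, keeping the first column as the join key.
frames = [
    pd.read_csv(name, sep=r"\s+", header=None, names=["key", f"col{i}"])
    for i, name in enumerate(["1.txt", "2.txt", "3.txt"], start=1)
]

# Outer-merge so employees missing from one file are still kept.
merged = reduce(lambda a, b: a.merge(b, on="key", how="outer"), frames)
print(merged.fillna("-").to_string(index=False))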
If you're familiar with SQL, then you can use the Perl DBD::CSV module to do the job easily. But that also depends on whether you're comfortable writing Perl.
