In a DHCP packet there is a field for the client hardware address (chaddr), but its value is not literally the same as a MAC address string like "fa:16:3e:6f:1a:9d".
If I know an interface's MAC address, say "fa:16:3e:6f:1a:9d", how do I compute the chaddr value from it?
0                   1                   2                   3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     op (1)    |   htype (1)   |   hlen (1)    |   hops (1)    |
+---------------+---------------+---------------+---------------+
|                            xid (4)                            |
+-------------------------------+-------------------------------+
|           secs (2)            |           flags (2)           |
+-------------------------------+-------------------------------+
|                          ciaddr  (4)                          |
+---------------------------------------------------------------+
|                          yiaddr  (4)                          |
+---------------------------------------------------------------+
|                          siaddr  (4)                          |
+---------------------------------------------------------------+
|                          giaddr  (4)                          |
+---------------------------------------------------------------+
|                                                               |
|                          chaddr  (16)                         |
|                                                               |
|                                                               |
+---------------------------------------------------------------+
|                                                               |
|                          sname   (64)                         |
+---------------------------------------------------------------+
|                                                               |
|                          file    (128)                        |
+---------------------------------------------------------------+
|                                                               |
|                          options (variable)                   |
+---------------------------------------------------------------+
See https://www.ietf.org/rfc/rfc2131.txt
4.4.1 Initialization and allocation of network address
...
The client MUST include its hardware address in the 'chaddr'
field, if necessary for delivery of DHCP reply messages.
For an Ethernet MAC address (htype = 1, hlen = 6), the first six bytes of chaddr contain the hardware address and the remaining ten bytes are zeros. On Linux you can inspect the contents of BOOTP/DHCP packets with dhcpdump, for example.
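So the conversion is just: parse the six hex bytes of the MAC string and pad with zeros to 16 bytes. A minimal Python sketch (the helper name mac_to_chaddr is my own):

def mac_to_chaddr(mac: str) -> bytes:
    """Turn "fa:16:3e:6f:1a:9d" into the 16-byte chaddr value."""
    hw = bytes.fromhex(mac.replace(":", ""))  # the 6 hardware-address bytes
    assert len(hw) == 6                       # Ethernet: htype = 1, hlen = 6
    return hw.ljust(16, b"\x00")              # pad with 10 zero bytes

chaddr = mac_to_chaddr("fa:16:3e:6f:1a:9d")
print(chaddr.hex())  # fa163e6f1a9d00000000000000000000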
I am facing an issue with Cinder volume usage calculation. You can see from the output below that a 10 GB volume is in reserved status and this 10 GB is not included in the total usage. Is there any way to clear this or update In_use? Our actual usage is 67 GB, but Cinder shows only 57 GB in use, with the remainder marked as Reserved.
cinder quota-usage 82eaddf1f348142cabbed0d2ff7e213a0
+----------------------+--------+----------+-------+
| Type                 | In_use | Reserved | Limit |
+----------------------+--------+----------+-------+
| backup_gigabytes     | 0      | 0        | 1000  |
| backups              | 0      | 0        | 10    |
| gigabytes            | 57     | 10       | 1000  |
| gigabytes_Local      | 57     | 10       | 1000  |
| per_volume_gigabytes | 0      | 0        | -1    |
| snapshots            | 0      | 0        | 10    |
| snapshots_Local      | 0      | 0        | -1    |
| volumes              | 6      | 1        | 10    |
| volumes_Local        | 6      | 1        | -1    |
+----------------------+--------+----------+-------+
This seems to be a bug; it has been reported at https://bugzilla.redhat.com/show_bug.cgi?id=1515576
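Until that is fixed, a manual workaround sometimes used for leaked reservations is to reset the reserved counters directly in the Cinder database. Treat the following as a hedged sketch only: it assumes a MySQL/MariaDB backend, that no volume operations are in flight, and that you have a fresh database backup. The connection values are placeholders, while quota_usages, reserved, and project_id are the names used by the Cinder schema.

import pymysql

# Clear leaked reservations for one project. ASSUMPTION: the
# reservations really are orphaned (no operations in flight) and the
# database has been backed up first; host/user/password are placeholders.
conn = pymysql.connect(host="controller", user="cinder",
                       password="CINDER_DBPASS", database="cinder")
with conn.cursor() as cur:
    cur.execute(
        "UPDATE quota_usages SET reserved = 0 "
        "WHERE project_id = %s AND reserved > 0",
        ("82eaddf1f348142cabbed0d2ff7e213a0",),
    )
conn.commit()
conn.close()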
Assuming the following table and using SQLite, I have the following question:
Node | Loadcase | Fx   | Cluster
-----|----------|------|--------
1    | 1        | 50   | A
2    | 1        | -40  | A
3    | 1        | 60   | B
4    | 1        | 80   | C
1    | 2        | 50   | A
2    | 2        | -50  | A
3    | 2        | 80   | B
4    | 2        | -100 | C
I am trying to write a query which fetches the maximum absolute value of Fx, together with its Loadcase, for each of the Nodes 1-4.
An additional requirement is that Fx values belonging to the same Cluster shall be summed up before the maximum is taken.
In the example above I would expect the following results:
Node | Loadcase | MaxAbsClusteredFx
-----|----------|------------------
1    | 1        | 10
2*   |          |
3    | 2        | 80
4    | 2        | 100

* N/A because it is summed up with node 1; both belong to cluster A.
Query:
For Node 1, I would execute a query similar to this:
SELECT Loadcase,abs(Fx GROUP BY Cluster) FROM MyTable WHERE abs(Fx GROUP BY Cluster) = max(abs(Fx GROUP BY Cluster)) AND Node = 1
I keep getting "Error while executing query: near "Forces": syntax error" or something similar.
Thankful for any help!
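GROUP BY is a clause of the SELECT statement, not something you can put inside abs(), which is why SQLite reports a syntax error. One way to express "sum per cluster first, then take the per-node maximum" is to aggregate by (Loadcase, Cluster) in a CTE, letting the lowest node number stand in for each cluster (which is exactly why node 2 drops out), and then take the maximum per node. A sketch using Python's sqlite3 with the sample data; it relies on SQLite's documented behavior that a bare column such as Loadcase is taken from the row where MAX() occurs:

import sqlite3

rows = [
    (1, 1, 50, "A"), (2, 1, -40, "A"), (3, 1, 60, "B"), (4, 1, 80, "C"),
    (1, 2, 50, "A"), (2, 2, -50, "A"), (3, 2, 80, "B"), (4, 2, -100, "C"),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (Node INT, Loadcase INT, Fx REAL, Cluster TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?, ?, ?)", rows)

query = """
WITH clustered AS (
    -- sum Fx per (Loadcase, Cluster); the lowest node number stands in
    -- for the whole cluster, so node 2 is absorbed into node 1
    SELECT MIN(Node) AS Node, Loadcase, SUM(Fx) AS ClusteredFx
    FROM MyTable
    GROUP BY Loadcase, Cluster
)
SELECT Node, Loadcase, MAX(ABS(ClusteredFx)) AS MaxAbsClusteredFx
FROM clustered
GROUP BY Node
ORDER BY Node
"""
for row in con.execute(query):
    print(row)  # (1, 1, 10.0), (3, 2, 80.0), (4, 2, 100.0)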
I have tried different things, but none succeeded. I have the following issue and would be very grateful if someone could help me.
I get the data from a view, several billion records, for different measures:
A)
| s_c_m1 | s_c_m2 | s_c_m3 | s_c_m4 | s_p_m1 | s_p_m2 | s_p_m3 | s_p_m4 |
|--------+--------+--------+--------+--------+--------+--------+--------|
| 0      | 1      | 2      | 3      | 4      | 5      | 6      | 7      |
| 1      | 2      | 3      | 4      | 5      | 6      | 7      | 8      |
| 2      | 3      | 4      | 5      | 6      | 7      | 8      | 9      |
|--------+--------+--------+--------+--------+--------+--------+--------|
Then I need to aggregate it by each measure. So far so good; I have this figured out.
B)
| s_c_m1 | s_c_m2 | s_c_m3 | s_c_m4 | s_p_m1 | s_p_m2 | s_p_m3 | s_p_m4 |
|--------+--------+--------+--------+--------+--------+--------+--------|
| 3      | 6      | 9      | 12     | 15     | 18     | 21     | 24     |
|--------+--------+--------+--------+--------+--------+--------+--------|
Then I need to get the data into the following key-value form:
C)
| measure | c  | p  |
|---------+----+----|
| m1      | 3  | 15 |
| m2      | 6  | 18 |
| m3      | 9  | 21 |
| m4      | 12 | 24 |
|---------+----+----|
The first 4 columns from B) become the c column in C), and the second 4 columns become the p column.
Is there an elegant way to do this that is easy to maintain? Ideally, if another measure were introduced in A) and B), no modification would be required and the query would pick up the new measure automatically.
I know how to do this in SQL Server and Postgres, but here I am missing the experience.
I think you should use a map for this.
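In Hive that would be map() plus a lateral view explode. The engine-agnostic version of the same key-value unpivot is one UNION ALL branch per measure; here is a sketch of the shape using Python's sqlite3 (table name B as in the example). Each new measure needs one extra branch, which is the maintenance cost the map variant reduces by keeping the column list in one place; to get the fully automatic pickup asked for, the branch list can itself be generated from the table's column metadata (PRAGMA table_info in SQLite, the information schema elsewhere):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE B (
    s_c_m1, s_c_m2, s_c_m3, s_c_m4,
    s_p_m1, s_p_m2, s_p_m3, s_p_m4)""")
con.execute("INSERT INTO B VALUES (3, 6, 9, 12, 15, 18, 21, 24)")

# One UNION ALL branch per measure; each branch emits (measure, c, p).
unpivot = """
SELECT 'm1' AS measure, s_c_m1 AS c, s_p_m1 AS p FROM B
UNION ALL SELECT 'm2', s_c_m2, s_p_m2 FROM B
UNION ALL SELECT 'm3', s_c_m3, s_p_m3 FROM B
UNION ALL SELECT 'm4', s_c_m4, s_p_m4 FROM B
"""
for row in con.execute(unpivot):
    print(row)  # ('m1', 3, 15) ... ('m4', 12, 24)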
Is it possible to find out how much battery each mobile application consumes per day (using the R language), given that I have collected data with the following fields:
record_id ,
date_time,
application_name,
battery_level,
battery_status
battery_level (a number representing the available percentage of the battery)
battery_status (status of the battery: charging, discharging, full)
The calculation is to be based on the collected data.
An example of such data:
+-----------+------------------+---------------------+---------------+----------------+
| record_id | application_name | date_time           | battery_level | battery_status |
+-----------+------------------+---------------------+---------------+----------------+
| 473849    | viber            | 2015-09-01 21:34:01 | 7             | Charging       |
| 473850    | watsup           | 2015-09-01 21:34:01 | 7             | Charging       |
| 473851    | AccuWeather      | 2015-09-01 21:34:01 | 7             | Charging       |
+-----------+------------------+---------------------+---------------+----------------+
As I understand it, it is not possible to calculate the battery consumption of each running mobile application using the data collected in my first post.
Let us have another data collection.
Assume that we have the following data: CPU usage per running application and memory usage per running application, as follows:
+-----------+------------------+---------------------+---------------------------------+------------------------------------+
| record_id | application_name | date_time           | cpu_usage_per_app_in_percentage | memory_usage_per_app_in_percentage |
+-----------+------------------+---------------------+---------------------------------+------------------------------------+
| 473849    | viber            | 2015-09-06 19:23:13 | 5                               | 2                                  |
| 473850    | watsup           | 2015-09-06 19:23:13 | 9                               | 2                                  |
| 473851    | AccuWeather      | 2015-09-06 19:23:13 | 8                               | 4                                  |
| 473980    | viber            | 2015-09-06 19:23:14 | 4                               | 1                                  |
| 474254    | watsup           | 2015-09-06 19:23:14 | 9                               | 1                                  |
| 474323    | AccuWeather      | 2015-09-06 19:23:14 | 9                               | 2                                  |
| 474533    | viber            | 2015-09-06 19:23:15 | 5                               | 2                                  |
| 474536    | watsup           | 2015-09-06 19:23:15 | 8                               | 3                                  |
| 474537    | AccuWeather      | 2015-09-06 19:23:15 | 5                               | 3                                  |
| 474538    | calendar         | 2015-09-06 19:23:15 | 7                               | 3                                  |
+-----------+------------------+---------------------+---------------------------------+------------------------------------+
You can suggest any other way of collecting the data; the key question is whether it is possible to calculate the battery consumption of each running mobile application at all. If so, how, and what data should be collected?
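One common heuristic (an attribution scheme, not a validated power model) is: take the battery_level drop between consecutive samples while the status is discharging, split each drop across the apps running in that interval in proportion to their CPU usage, and sum per app and day. A sketch of that idea, written in Python/pandas here for brevity; the same logic ports directly to R (e.g. with dplyr). Column names follow the tables above, and the sample values are made up:

import pandas as pd

# Per-app CPU samples, as in the second table above.
cpu = pd.DataFrame({
    "date_time": pd.to_datetime(["2015-09-06 19:23:13"] * 3
                                + ["2015-09-06 19:23:14"] * 3),
    "application_name": ["viber", "watsup", "AccuWeather"] * 2,
    "cpu_usage_per_app_in_percentage": [5, 9, 8, 4, 9, 9],
})

# One battery sample per timestamp, as in the first table above.
battery = pd.DataFrame({
    "date_time": pd.to_datetime(["2015-09-06 19:23:13",
                                 "2015-09-06 19:23:14"]),
    "battery_level": [80.0, 79.5],
    "battery_status": ["Discharging", "Discharging"],
}).sort_values("date_time")

# Battery percent consumed between one sample and the next; charging
# intervals (level going up) are clipped to zero.
battery["drop"] = battery["battery_level"].diff(-1).clip(lower=0).fillna(0)

# Split each interval's drop across its apps, weighted by CPU usage.
df = cpu.merge(battery[["date_time", "drop"]], on="date_time")
total_cpu = df.groupby("date_time")["cpu_usage_per_app_in_percentage"] \
              .transform("sum")
df["battery_share"] = (df["drop"]
                       * df["cpu_usage_per_app_in_percentage"] / total_cpu)

# Estimated battery percent consumed per app and day.
per_day = (df.groupby([df["date_time"].dt.date, "application_name"])
             ["battery_share"].sum())
print(per_day)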
Normally I can run show partitions <table> in Hive, but when it is a Parquet table, Hive does not understand it. I can go to HDFS and check the directory structure, but that is not ideal. Is there a better way to do this?
I am using Impala 1.4.0 and I can see partitions.
From the impala-shell, give the command:
show partitions <mytablename>
I have something looking like this:
+-------+-------+-----+-------+--------+---------+--------------+---------+
| year  | month | day | #Rows | #Files | Size    | Bytes Cached | Format  |
+-------+-------+-----+-------+--------+---------+--------------+---------+
| 2013  | 11    | 1   | -1    | 3      | 25.87MB | NOT CACHED   | PARQUET |
| 2013  | 11    | 2   | -1    | 3      | 24.84MB | NOT CACHED   | PARQUET |
| 2013  | 11    | 3   | -1    | 2      | 19.05MB | NOT CACHED   | PARQUET |
| 2013  | 11    | 4   | -1    | 3      | 23.63MB | NOT CACHED   | PARQUET |
| 2013  | 11    | 5   | -1    | 3      | 26.56MB | NOT CACHED   | PARQUET |
+-------+-------+-----+-------+--------+---------+--------------+---------+
Alternatively, you can look at your table in HDFS. Tables are normally found under one of these paths:
/user/hivestore/warehouse/<mytablename> or
/user/hive/warehouse/<mytablename>
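If you do go the HDFS route, it can at least be scripted. A small sketch in Python that shells out to the hdfs CLI; the table name is hypothetical and the warehouse root is the common default, so adjust both for your setup:

import subprocess

# Hypothetical table path; the warehouse root may differ per install.
table_dir = "/user/hive/warehouse/mytablename"

# -R recurses into nested partition dirs such as year=2013/month=11/day=1
result = subprocess.run(["hdfs", "dfs", "-ls", "-R", table_dir],
                        capture_output=True, text=True, check=True)
print(result.stdout)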
Unfortunately, no. The issue is still open, though, so checking manually seems to be the only option right now.