Call and Message Data
1. Introduction
In `niimpy`, communication data includes call and SMS information. These data can reveal important information about people's circadian rhythms, social patterns, and activity, to name just a few. Therefore, it is important to organize this information for further processing and analysis. To address this, `niimpy` includes a set of functions to clean, downsample, and extract features from communication data. The available features are:
- `call_duration_total`: total duration of incoming and outgoing calls
- `call_duration_mean`: mean duration of incoming and outgoing calls
- `call_duration_median`: median duration of incoming and outgoing calls
- `call_duration_std`: standard deviation of the duration of incoming and outgoing calls
- `call_count`: number of calls within a time window
- `call_outgoing_incoming_ratio`: number of outgoing calls divided by the number of incoming calls
- `sms_count`: count of incoming and outgoing text messages
- `extract_features_comms`: wrapper to extract several features at the same time
In the following, we will analyze call logs provided by `niimpy` as an example to illustrate the use of `niimpy`'s communication preprocessing functions.
2. Read data
Let's start by reading the example data provided in `niimpy`. These data have already been shaped into a format that meets the requirements of the data schema. First, we import the `niimpy` package, and then the module we will use (`communication`), giving it a short name for convenience.
[1]:
import niimpy
import niimpy.preprocessing.communication as com
from niimpy import config
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
Now let's read the example data provided in `niimpy`. The example data is in csv format, so we need to use the `read_csv` function. When reading the data, we can specify the timezone where the data was collected with the argument `tz`; this makes handling daylight saving time easier. The output is a dataframe. We can also check the number of rows and columns in the dataframe.
[2]:
data = niimpy.read_csv(config.MULTIUSER_AWARE_CALLS_PATH, tz='Europe/Helsinki')
data.shape
[2]:
(38, 6)
The data was successfully read. We can see that there are 38 datapoints with 6 columns in the dataset. However, we do not yet know what the data really looks like, so let's have a quick look:
[3]:
data.head()
[3]:
|  | user | device | time | call_type | call_duration | datetime |
|---|---|---|---|---|---|---|
| 2020-01-09 02:08:03.895999908+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578528e+09 | incoming | 1079 | 2020-01-09 02:08:03.895999908+02:00 |
| 2020-01-09 02:49:44.969000101+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578531e+09 | outgoing | 174 | 2020-01-09 02:49:44.969000101+02:00 |
| 2020-01-09 02:22:57.168999910+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578529e+09 | outgoing | 890 | 2020-01-09 02:22:57.168999910+02:00 |
| 2020-01-09 02:27:21.187000036+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578530e+09 | outgoing | 1342 | 2020-01-09 02:27:21.187000036+02:00 |
| 2020-01-09 02:47:16.177000046+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578531e+09 | incoming | 645 | 2020-01-09 02:47:16.177000046+02:00 |
[4]:
data.tail()
[4]:
|  | user | device | time | call_type | call_duration | datetime |
|---|---|---|---|---|---|---|
| 2019-08-12 22:10:21.503999949+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565637e+09 | incoming | 715 | 2019-08-12 22:10:21.503999949+03:00 |
| 2019-08-12 22:27:19.923000097+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565638e+09 | outgoing | 225 | 2019-08-12 22:27:19.923000097+03:00 |
| 2019-08-13 07:01:00.960999966+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565669e+09 | outgoing | 1231 | 2019-08-13 07:01:00.960999966+03:00 |
| 2019-08-13 07:28:27.657999992+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565671e+09 | incoming | 591 | 2019-08-13 07:28:27.657999992+03:00 |
| 2019-08-13 07:21:26.436000109+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565670e+09 | outgoing | 375 | 2019-08-13 07:21:26.436000109+03:00 |
By exploring the head and tail of the dataframe we can form an idea of its entirety. From the data, we can see that:

- rows are observations, indexed by timestamps, i.e. each row represents a call that was received, made, or missed at a given date and time
- columns are characteristics of each observation, for example, the user whose data we are analyzing
- there are at least two different users in the dataframe
- there are two main columns: `call_type` and `call_duration`. In this case, the `call_type` column stores whether the call was incoming, outgoing, or missed, and `call_duration` contains the duration of the call in seconds
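For a quick overview of the call categories present, we could also count them directly (a minimal sketch; `value_counts` is standard pandas, not a `niimpy` function):

# Count how many calls of each type (incoming, outgoing, missed) the data contains
data['call_type'].value_counts()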
We can also check the first three entries for each user:
[5]:
data.drop_duplicates(['user','call_duration']).groupby('user').head(3)
[5]:
|  | user | device | time | call_type | call_duration | datetime |
|---|---|---|---|---|---|---|
| 2020-01-09 02:08:03.895999908+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578528e+09 | incoming | 1079 | 2020-01-09 02:08:03.895999908+02:00 |
| 2020-01-09 02:49:44.969000101+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578531e+09 | outgoing | 174 | 2020-01-09 02:49:44.969000101+02:00 |
| 2020-01-09 02:22:57.168999910+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578529e+09 | outgoing | 890 | 2020-01-09 02:22:57.168999910+02:00 |
| 2019-08-08 22:32:25.256999969+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565293e+09 | incoming | 1217 | 2019-08-08 22:32:25.256999969+03:00 |
| 2019-08-08 22:53:35.107000113+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565294e+09 | incoming | 383 | 2019-08-08 22:53:35.107000113+03:00 |
| 2019-08-08 22:31:34.539999962+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565293e+09 | incoming | 1142 | 2019-08-08 22:31:34.539999962+03:00 |
Sometimes the data may arrive in a disordered manner, so just to make sure, let's sort the dataframe and compare the results. We will sort by the columns "user" and "datetime", since we want to order the information first by participant and then by time of occurrence. Luckily, in our dataframe, the index and datetime are the same.
[6]:
data.sort_values(by=['user', 'datetime'], inplace=True)
data.drop_duplicates(['user','call_duration']).groupby('user').head(3)
[6]:
|  | user | device | time | call_type | call_duration | datetime |
|---|---|---|---|---|---|---|
| 2019-08-08 22:31:34.539999962+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565293e+09 | incoming | 1142 | 2019-08-08 22:31:34.539999962+03:00 |
| 2019-08-08 22:32:25.256999969+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565293e+09 | incoming | 1217 | 2019-08-08 22:32:25.256999969+03:00 |
| 2019-08-08 22:43:45.834000111+03:00 | iGyXetHE3S8u | Cq9vueHh3zVs | 1.565293e+09 | incoming | 1170 | 2019-08-08 22:43:45.834000111+03:00 |
| 2020-01-09 01:55:16.996000051+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578528e+09 | outgoing | 1256 | 2020-01-09 01:55:16.996000051+02:00 |
| 2020-01-09 02:06:09.790999889+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578528e+09 | outgoing | 271 | 2020-01-09 02:06:09.790999889+02:00 |
| 2020-01-09 02:08:03.895999908+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578528e+09 | incoming | 1079 | 2020-01-09 02:08:03.895999908+02:00 |
By comparing the last two dataframes, we can see that sorting the values was a good move. For example, in the unsorted dataframe, the earliest timestamp for user iGyXetHE3S8u was 2019-08-08 22:32:25, whereas in the sorted dataframe it is 2019-08-08 22:31:34. Small differences, but still important.
* TIP! Data format requirements (or what our data should look like)

Data can take other shapes and formats. However, the `niimpy` data schema requires it to be in a certain shape. This means the dataframe needs to have at least the following characteristics:

1. One row per call. Each row should store information about one call only
2. Each row's index should be a timestamp
3. There should be at least four columns:
   - index: date and time when the event happened (timestamp)
   - user: the name of the user whose data is analyzed. Each user should have a unique name or hash (i.e. one hash for each unique user)
   - call_type: whether the call was incoming, outgoing, or missed. The exact words incoming, outgoing, and missed should be used
   - call_duration: the duration of the call in seconds
4. Columns additional to those listed in item 3 are allowed
5. The names of the columns do not need to be exactly "user", "call_type", or "call_duration", as we can pass our own names in an argument (to be explained later)

Below is an example of a dataframe that complies with these minimum requirements:
[7]:
example_dataschema = data[['user','call_type','call_duration']]
example_dataschema.head(3)
[7]:
|  | user | call_type | call_duration |
|---|---|---|---|
| 2019-08-08 22:31:34.539999962+03:00 | iGyXetHE3S8u | incoming | 1142 |
| 2019-08-08 22:32:25.256999969+03:00 | iGyXetHE3S8u | incoming | 1217 |
| 2019-08-08 22:43:45.834000111+03:00 | iGyXetHE3S8u | incoming | 1170 |
4. Extracting features
There are two ways to extract features: we can use each function separately, or we can use `niimpy`'s ready-made wrapper. Both require us to specify arguments in dictionaries that are passed to the functions/wrapper to customize how they work. Let's first look at how to extract features using stand-alone functions.
4.1 Extract features using stand-alone functions
We can use `niimpy`'s functions to compute communication features. Each function requires two inputs:

- (mandatory) a dataframe that complies with the minimum requirements (see '* TIP! Data format requirements' above)
- (optional) an argument dictionary for the stand-alone function
4.1.1 The argument dictionary for stand-alone functions (or how we specify the way a function works)
In this dictionary, we can input two main options to customize the way a stand-alone function works:

- the name of the column to be preprocessed: since the dataframe may have several columns, we need to specify which column contains the data we would like to preprocess. To do so, we simply pass the name of the column to the argument `communication_column_name`.
- the way we resample: resampling options are specified in `niimpy` as a dictionary. `niimpy`'s resampling and aggregating relies on `pandas.DataFrame.resample`, so mastering this pandas function will help us greatly with `niimpy`'s preprocessing. Please familiarize yourself with the pandas resample function before continuing. Briefly, to use `pandas.DataFrame.resample`, we need a rule, which states the intervals used to resample our data (e.g., 15 seconds, 30 minutes, 1 hour). Nevertheless, we can pass more details to the function to specify the exact sampling we would like. For example, we could use the `closed` argument to specify which side of each interval is closed, or the `offset` argument to start the binning with an offset, etc. There are plenty of options, so we strongly recommend having the `pandas.DataFrame.resample` documentation at hand. All arguments for `pandas.DataFrame.resample` are specified in a dictionary whose keys are the argument names and whose values are the values for each selected argument. This dictionary is passed as the value of the key `resample_args` in `niimpy`.
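To build intuition about the resampling itself, here is a tiny self-contained pandas sketch (with made-up data, independent of `niimpy`) showing how the `rule` and `offset` arguments shape the bins:

import pandas as pd  # already imported above; repeated so this snippet is self-contained

# Toy series: one event per minute over one hour
toy = pd.Series(1, index=pd.date_range("2020-01-09 01:55", periods=60, freq="1min"))

# Sum events in 20-minute bins; bin edges align to fixed clock times (01:40, 02:00, ...)
toy.resample(rule="20T").sum()

# Same rule, but bin edges shifted by 5 minutes (01:45, 02:05, ...)
toy.resample(rule="20T", offset="5min").sum()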
Let’s see some basic examples of these dictionaries:
[8]:
feature_dict1 = {"communication_column_name":"call_duration","resample_args":{"rule":"1D"}}
feature_dict2 = {"communication_column_name":"random_name","resample_args":{"rule":"30T"}}
feature_dict3 = {"communication_column_name":"other_name","resample_args":{"rule":"45T","origin":"end"}}
Here, we have three basic feature dictionaries.

- `feature_dict1` will be used to analyze the data stored in the column `call_duration` in our dataframe. The data will be binned in one-day periods
- `feature_dict2` will be used to analyze the data stored in the column `random_name` in our dataframe. The data will be aggregated in 30-minute bins
- `feature_dict3` will be used to analyze the data stored in the column `other_name` in our dataframe. The data will be binned in 45-minute bins, but the binning will start from the last timestamp in the dataframe
Default values: if no arguments are passed, `niimpy`'s defaults are "call_duration" for the `communication_column_name`, and 30-minute aggregation bins.
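For instance, relying entirely on these defaults could look like the following (a minimal sketch; it assumes the stand-alone functions accept an empty argument dictionary, in line with how they are called elsewhere in this notebook):

# Defaults apply: column "call_duration", 30-minute aggregation bins
durations_default = com.call_duration_total(data, {})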
4.1.2 Using the functions
Now that we understand how the functions are customized, it is time to compute our first communication feature. Suppose we are interested in extracting the total duration of outgoing calls every 20 minutes. We will need `niimpy`'s `call_duration_total` function, the data, and a dictionary to customize the function. Let's create the dictionary first.
[9]:
function_features={"communication_column_name":"call_duration","resample_args":{"rule":"20T"}}
Now let’s use the function to preprocess the data.
[10]:
my_call_duration = com.call_duration_total(data, function_features)
Let’s look at some values for one of the subjects.
[11]:
my_call_duration[my_call_duration["user"] == "jd9INuQ5BBlW"]
[11]:
|  | device | user | outgoing_duration_total | incoming_duration_total | missed_duration_total |
|---|---|---|---|---|---|
| 2020-01-09 01:40:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 1256.0 | 0.0 | 0.0 |
| 2020-01-09 02:00:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 2192.0 | 1079.0 | 0.0 |
| 2020-01-09 02:20:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 3696.0 | 4650.0 | 0.0 |
| 2020-01-09 02:40:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 174.0 | 645.0 | 0.0 |
| 2020-01-09 03:00:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 0.0 | 269.0 | 0.0 |
Let's recall what the original data looked like for this subject:
[12]:
data[data["user"]=="jd9INuQ5BBlW"].head(7)
[12]:
|  | user | device | time | call_type | call_duration | datetime |
|---|---|---|---|---|---|---|
| 2020-01-09 01:55:16.996000051+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578528e+09 | outgoing | 1256 | 2020-01-09 01:55:16.996000051+02:00 |
| 2020-01-09 02:06:09.790999889+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578528e+09 | outgoing | 271 | 2020-01-09 02:06:09.790999889+02:00 |
| 2020-01-09 02:08:03.895999908+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578528e+09 | incoming | 1079 | 2020-01-09 02:08:03.895999908+02:00 |
| 2020-01-09 02:10:06.573999882+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578529e+09 | missed | 0 | 2020-01-09 02:10:06.573999882+02:00 |
| 2020-01-09 02:11:37.648999929+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578529e+09 | outgoing | 1070 | 2020-01-09 02:11:37.648999929+02:00 |
| 2020-01-09 02:12:31.164000034+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578529e+09 | outgoing | 851 | 2020-01-09 02:12:31.164000034+02:00 |
| 2020-01-09 02:21:45.877000093+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578529e+09 | incoming | 1489 | 2020-01-09 02:21:45.877000093+02:00 |
We see that the bins are indeed 20-minute bins; however, they are adjusted to fixed, predetermined intervals, i.e. a bin does not start at the time of the first datapoint. Instead, `pandas` starts the binning at 00:00:00 of every day and counts 20-minute intervals from there.
If we want the binning to start from the first datapoint in our dataset, we need the `origin` parameter and a for loop.
[13]:
users = list(data['user'].unique())
results = []
for user in users:
    start_time = data[data["user"]==user].index.min()
    function_features={"communication_column_name":"call_duration","resample_args":{"rule":"20T","origin":start_time}}
    results.append(com.call_duration_total(data[data["user"]==user], function_features))
my_call_duration = pd.concat(results)
[14]:
my_call_duration
[14]:
|  | device | user | outgoing_duration_total | incoming_duration_total | missed_duration_total |
|---|---|---|---|---|---|
| 2019-08-09 07:11:34.539999962+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 1322.0 | 0 | 0.0 |
| 2019-08-09 07:31:34.539999962+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 959.0 | 1034 | 0.0 |
| 2019-08-09 07:51:34.539999962+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0.0 | 921 | 0.0 |
| 2019-08-09 08:11:34.539999962+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0.0 | 0 | 0.0 |
| 2019-08-09 08:31:34.539999962+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0.0 | 0 | 0.0 |
| ... | ... | ... | ... | ... | ... |
| 2019-08-09 06:51:34.539999962+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0.0 | 0 | 0.0 |
| 2020-01-09 01:55:16.996000051+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 3448.0 | 1079 | 0.0 |
| 2020-01-09 02:15:16.996000051+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 3078.0 | 1897 | 0.0 |
| 2020-01-09 02:35:16.996000051+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 792.0 | 3398 | 0.0 |
| 2020-01-09 02:55:16.996000051+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 0.0 | 269 | 0.0 |
319 rows × 5 columns
4.2 Extract features using the wrapper
We can use `niimpy`'s ready-made wrapper to extract one or several features at the same time. The wrapper requires two inputs:

- (mandatory) a dataframe that complies with the minimum requirements (see '* TIP! Data format requirements' above)
- (optional) an argument dictionary for the wrapper
4.2.1 The argument dictionary for wrapper (or how we specify the way the wrapper works)
This argument dictionary builds on the dictionaries created for stand-alone functions. If you do not know how to create those argument dictionaries, please read section 4.1.1 The argument dictionary for stand-alone functions (or how we specify the way a function works) first.
The wrapper dictionary is simple. Its keys are the feature functions we want to compute, and its values are the argument dictionaries created for each stand-alone function we employ. Let's see some examples of wrapper dictionaries:
[15]:
wrapper_features1 = {com.call_duration_total:{"communication_column_name":"call_duration","resample_args":{"rule":"1D"}},
com.call_count:{"communication_column_name":"call_duration","resample_args":{"rule":"1D"}}}
`wrapper_features1` will be used to analyze two features, `call_duration_total` and `call_count`. For the feature `call_duration_total`, we will use the data stored in the column `call_duration` in our dataframe and the data will be binned in one-day periods. For the feature `call_count`, we will use the data stored in the column `call_duration` in our dataframe and the data will also be binned in one-day periods.
[16]:
wrapper_features2 = {com.call_duration_mean:{"communication_column_name":"random_name","resample_args":{"rule":"1D"}},
com.call_duration_median:{"communication_column_name":"random_name","resample_args":{"rule":"5H","offset":"5min"}}}
`wrapper_features2` will be used to analyze two features, `call_duration_mean` and `call_duration_median`. For the feature `call_duration_mean`, we will use the data stored in the column `random_name` in our dataframe and the data will be binned in one-day periods. For the feature `call_duration_median`, we will use the data stored in the column `random_name` in our dataframe and the data will be binned in 5-hour periods with a 5-minute offset.
[17]:
wrapper_features3 = {com.call_duration_total:{"communication_column_name":"one_name","resample_args":{"rule":"1D","offset":"5min"}},
com.call_count:{"communication_column_name":"one_name","resample_args":{"rule":"5H"}},
com.call_duration_mean:{"communication_column_name":"another_name","resample_args":{"rule":"30T","origin":"end_day"}}}
`wrapper_features3` will be used to analyze three features, `call_duration_total`, `call_count`, and `call_duration_mean`. For the feature `call_duration_total`, we will use the data stored in the column `one_name` and the data will be binned in one-day periods with a 5-minute offset. For the feature `call_count`, we will use the data stored in the column `one_name` in our dataframe and the data will be binned in 5-hour periods. Finally, for the feature `call_duration_mean`, we will use the data stored in the column `another_name` in our dataframe and the data will be binned in 30-minute periods, with the origin of the bins at the ceiling midnight of the last day.
Default values: if no arguments are passed, `niimpy`'s defaults are "call_duration" for the `communication_column_name`, and 30-minute aggregation bins. Moreover, in the absence of the argument dictionary, the wrapper will compute all available functions.
4.2.2 Using the wrapper
Now that we understand how the wrapper is customized, it is time to compute our first communication feature using the wrapper. Suppose we are interested in extracting the total call duration every 20 minutes. We will need `niimpy`'s `extract_features_comms` function, the data, and a dictionary to customize the wrapper. Let's create the dictionary first.
[18]:
wrapper_features1 = {com.call_duration_total:{"communication_column_name":"call_duration","resample_args":{"rule":"20T"}}}
Now let’s use the wrapper
[19]:
results_wrapper = com.extract_features_comms(data, features=wrapper_features1)
results_wrapper.head(5)
computing <function call_duration_total at 0x7cb25e6ecf40>...
[19]:
|  | device | user | outgoing_duration_total | incoming_duration_total | missed_duration_total |
|---|---|---|---|---|---|
| 2020-01-09 01:40:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 1256.0 | 0.0 | 0.0 |
| 2020-01-09 02:00:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 2192.0 | 1079.0 | 0.0 |
| 2020-01-09 02:20:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 3696.0 | 4650.0 | 0.0 |
| 2020-01-09 02:40:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 174.0 | 645.0 | 0.0 |
| 2019-08-09 07:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 1322.0 | 0.0 | 0.0 |
Our first attempt was successful. Now, let's try something more. Let's assume we want to compute `call_duration_total` and `call_count` in 20-minute bins.
[20]:
wrapper_features2 = {com.call_duration_total:{"communication_column_name":"call_duration","resample_args":{"rule":"20T"}},
com.call_count:{"communication_column_name":"call_duration","resample_args":{"rule":"20T"}}}
results_wrapper = com.extract_features_comms(data, features=wrapper_features2)
results_wrapper.head(5)
computing <function call_duration_total at 0x7cb25e6ecf40>...
computing <function call_count at 0x7cb25e6ed1c0>...
[20]:
|  | device | user | outgoing_duration_total | incoming_duration_total | missed_duration_total | outgoing_count | incoming_count | missed_count |
|---|---|---|---|---|---|---|---|---|
| 2020-01-09 01:40:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 1256.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 2020-01-09 02:00:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 2192.0 | 1079.0 | 0.0 | 3.0 | 1.0 | 1.0 |
| 2020-01-09 02:20:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 3696.0 | 4650.0 | 0.0 | 5.0 | 4.0 | 0.0 |
| 2020-01-09 02:40:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 174.0 | 645.0 | 0.0 | 1.0 | 1.0 | 0.0 |
| 2019-08-09 07:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 1322.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
Great! Another successful attempt. We see from the results that more columns were added with the required calculations. This is how the wrapper works when all features are computed with the same bins. Now, let's see how the wrapper performs when each function has different binning requirements. Let's assume we need to compute `call_duration_mean` every day, and `call_duration_median` every 5 hours with a 5-minute offset.
[21]:
wrapper_features3 = {com.call_duration_mean:{"communication_column_name":"call_duration","resample_args":{"rule":"1D"}},
com.call_duration_median:{"communication_column_name":"call_duration","resample_args":{"rule":"5H","offset":"5min"}}}
results_wrapper = com.extract_features_comms(data, features=wrapper_features3)
results_wrapper.head(5)
computing <function call_duration_mean at 0x7cb25e6ecfe0>...
computing <function call_duration_median at 0x7cb25e6ed080>...
[21]:
|  | device | user | outgoing_duration_mean | incoming_duration_mean | missed_duration_mean | outgoing_duration_median | incoming_duration_median | missed_duration_median |
|---|---|---|---|---|---|---|---|---|
| 2020-01-09 00:00:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 731.8 | 949.000000 | 0.0 | NaN | NaN | NaN |
| 2019-08-09 00:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 1140.5 | 651.666667 | 0.0 | NaN | NaN | NaN |
| 2019-08-10 00:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 1363.0 | 1298.000000 | 0.0 | NaN | NaN | NaN |
| 2019-08-11 00:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0.0 | 0.000000 | 0.0 | NaN | NaN | NaN |
| 2019-08-12 00:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 209.0 | 715.000000 | 0.0 | NaN | NaN | NaN |
[22]:
results_wrapper.tail(5)
[22]:
|  | device | user | outgoing_duration_mean | incoming_duration_mean | missed_duration_mean | outgoing_duration_median | incoming_duration_median | missed_duration_median |
|---|---|---|---|---|---|---|---|---|
| 2019-08-12 09:05:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | NaN | NaN | NaN | 0.0 | 0.0 | 0.0 |
| 2019-08-12 14:05:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | NaN | NaN | NaN | 0.0 | 0.0 | 0.0 |
| 2019-08-12 19:05:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | NaN | NaN | NaN | 0.0 | 715.0 | 0.0 |
| 2019-08-13 00:05:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | NaN | NaN | NaN | 0.0 | 0.0 | 0.0 |
| 2019-08-13 05:05:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | NaN | NaN | NaN | 0.0 | 591.0 | 0.0 |
The output is once again a dataframe, and in this case two aggregations are shown. The first is the daily aggregation computed for the `call_duration_mean` feature (head). The second is the 5-hour aggregation with a 5-minute offset for `call_duration_median` (tail). Note that because `call_duration_median` is not aggregated daily, its columns hold NaN values at the daily timestamps. Similarly, because `call_duration_mean` is not aggregated in 5-hour windows, its columns hold NaN values in those rows.
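If the mixed NaN rows are inconvenient, the result can be split back into per-feature dataframes with plain pandas (a sketch; the column names below are taken from the output above):

# Rows produced by the daily call_duration_mean aggregation
daily_mean = results_wrapper[results_wrapper["outgoing_duration_mean"].notna()]
# Rows produced by the 5-hour call_duration_median aggregation
median_5h = results_wrapper[results_wrapper["outgoing_duration_median"].notna()]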
4.2.3 Wrapper and its default option
The default option computes all features in 30-minute aggregation windows. To use the `extract_features_comms` function with its default options, simply call the function.
[23]:
default = com.extract_features_comms(data, features=None)
computing <function call_duration_total at 0x7cb25e6ecf40>...
computing <function call_duration_mean at 0x7cb25e6ecfe0>...
computing <function call_duration_median at 0x7cb25e6ed080>...
computing <function call_duration_std at 0x7cb25e6ed120>...
computing <function call_count at 0x7cb25e6ed1c0>...
computing <function call_outgoing_incoming_ratio at 0x7cb25e6ed260>...
computing <function call_distribution at 0x7cb25e6ed300>...
computing <function message_count at 0x7cb25e6ed3a0>...
computing <function message_outgoing_incoming_ratio at 0x7cb25e6ed440>...
computing <function message_distribution at 0x7cb25e6ed4e0>...
The function prints each feature as it is computed, so you can track its progress. Now let's have a look at the output.
[24]:
default.head()
[24]:
|  | device | user | outgoing_duration_total | incoming_duration_total | missed_duration_total | outgoing_duration_mean | incoming_duration_mean | missed_duration_mean | outgoing_duration_median | incoming_duration_median | missed_duration_median | outgoing_duration_std | incoming_duration_std | missed_duration_std | outgoing_count | incoming_count | missed_count | outgoing_incoming_ratio | distribution |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2020-01-09 01:30:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 1256.0 | 0.0 | 0.0 | 1256.000000 | 0.000000 | 0.0 | 1256.0 | 0.0 | 0.0 | 0.000000 | 0.000000 | 0.0 | 1.0 | 0.0 | 0.0 | inf | NaN |
| 2020-01-09 02:00:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 5270.0 | 2976.0 | 0.0 | 752.857143 | 992.000000 | 0.0 | 851.0 | 1079.0 | 0.0 | 443.087060 | 545.726122 | 0.0 | 7.0 | 3.0 | 1.0 | 2.333333 | 0.888889 |
| 2020-01-09 02:30:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 792.0 | 3398.0 | 0.0 | 396.000000 | 1132.666667 | 0.0 | 396.0 | 1264.0 | 0.0 | 313.955411 | 437.058730 | 0.0 | 2.0 | 3.0 | 0.0 | 0.666667 | NaN |
| 2019-08-09 07:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 1322.0 | 0.0 | 0.0 | 1322.000000 | 0.000000 | 0.0 | 1322.0 | 0.0 | 0.0 | 0.000000 | 0.000000 | 0.0 | 1.0 | 0.0 | 0.0 | inf | 0.833333 |
| 2019-08-09 07:30:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 959.0 | 1824.0 | 0.0 | 959.000000 | 912.000000 | 0.0 | 959.0 | 912.0 | 0.0 | 0.000000 | 172.534055 | 0.0 | 1.0 | 2.0 | 1.0 | 0.500000 | NaN |
4.3 SMS computations
`niimpy` includes one function to count outgoing and incoming SMS. This function is not automatically called by `extract_features_comms`, but it can be used as a stand-alone function. Let's see a quick example where we load the SMS data and preprocess it.
[25]:
data = niimpy.read_csv(config.MULTIUSER_AWARE_MESSAGES_PATH, tz='Europe/Helsinki')
data.head()
[25]:
|  | user | device | time | message_type | datetime |
|---|---|---|---|---|---|
| 2020-01-09 02:34:46.644999981+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578530e+09 | incoming | 2020-01-09 02:34:46.644999981+02:00 |
| 2020-01-09 02:34:58.802999973+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578530e+09 | outgoing | 2020-01-09 02:34:58.802999973+02:00 |
| 2020-01-09 02:35:37.611000061+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578530e+09 | outgoing | 2020-01-09 02:35:37.611000061+02:00 |
| 2020-01-09 02:55:40.640000105+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578531e+09 | outgoing | 2020-01-09 02:55:40.640000105+02:00 |
| 2020-01-09 02:55:50.914000034+02:00 | jd9INuQ5BBlW | 3p83yASkOb_B | 1.578531e+09 | incoming | 2020-01-09 02:55:50.914000034+02:00 |
[26]:
sms = com.message_count(data, config={"communication_column_name": "message_type", "call_type_column": "message_type"})
sms
[26]:
|  | device | user | outgoing_count | incoming_count |
|---|---|---|---|---|
| 2020-01-09 02:30:00+02:00 | 3p83yASkOb_B | jd9INuQ5BBlW | 5 | 5.0 |
| 2019-08-13 08:30:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 1 | 1.0 |
| 2019-08-13 09:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0 | 0.0 |
| 2019-08-13 09:30:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 2 | 1.0 |
| 2019-08-13 10:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0 | 0.0 |
| ... | ... | ... | ... | ... |
| 2020-01-09 12:00:00+02:00 | OWd1Uau8POix | jd9INuQ5BBlW | 0 | 0.0 |
| 2020-01-09 12:30:00+02:00 | OWd1Uau8POix | jd9INuQ5BBlW | 0 | 3.0 |
| 2020-01-09 13:00:00+02:00 | OWd1Uau8POix | jd9INuQ5BBlW | 0 | 0.0 |
| 2020-01-09 13:30:00+02:00 | OWd1Uau8POix | jd9INuQ5BBlW | 0 | 0.0 |
| 2020-01-09 14:00:00+02:00 | OWd1Uau8POix | jd9INuQ5BBlW | 2 | 6.0 |
114 rows × 4 columns
Similar to the call functions, we need to define the `config` dictionary. Likewise, if we leave it empty, all data is aggregated in 30-minute bins. We see that the function also differentiates between incoming and outgoing messages. Let's quickly summarize the data requirements for SMS.
* TIP! Data format requirements for SMS (special case)

Data can take other shapes and formats. However, the `niimpy` data schema requires it to be in a certain shape. This means the dataframe needs to have at least the following characteristics:

1. One row per message. Each row should store information about one message only
2. Each row's index should be a timestamp
3. There should be at least three columns:
   - index: date and time when the event happened (timestamp)
   - user: the name of the user whose data is analyzed. Each user should have a unique name or hash (i.e. one hash for each unique user)
   - message_type: determines if the message was sent (outgoing) or received (incoming)
4. Columns additional to those listed in item 3 are allowed
5. The names of the columns do not need to be exactly "user" or "message_type"
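Analogous to the call example earlier, a dataframe meeting these minimum SMS requirements can be sliced from the message data we just loaded (a sketch mirroring the earlier `example_dataschema`):

# Minimal SMS schema: timestamp index plus user and message_type columns
example_sms_dataschema = data[['user','message_type']]
example_sms_dataschema.head(3)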
5. Implementing own features
If none of the provided functions suits our needs, we can easily implement our own customized features. To do so, we need to define a function that accepts a dataframe and returns a dataframe. The returned object should be indexed by user and timestamps (multiindex). To make the feature readily available in the default options, we need to add the `call` prefix to the new function's name (e.g. `call_my_new_feature`). Let's assume we need a new function that counts all calls, regardless of their direction (outgoing, incoming, or missed). Let's first define the function.
[27]:
def call_count_all(df, config=None):
    # Fall back to an empty dictionary so the key checks below work
    if config is None:
        config = {}
    # Column holding the call data; default mirrors niimpy's "call_duration"
    if "communication_column_name" not in config:
        col_name = "call_duration"
    else:
        col_name = config["communication_column_name"]
    # Default resampling: 30-minute bins
    if "resample_args" not in config:
        config["resample_args"] = {"rule": "30T"}
    if len(df) > 0:
        # Count every call per user/device within each resampled bin
        result = df.groupby(["user", "device"])[col_name].resample(**config["resample_args"]).count()
        result.rename("call_count_all", inplace=True)
        result = result.to_frame()
        result = result.reset_index(["user", "device"])
        return result
    return None
Then, we can call our new function the stand-alone way or through the `extract_features_comms` function. Because the stand-alone way is the common way to call functions in Python, we will not show it. Instead, we will show how to integrate this new function into the wrapper. Let's read the data again and assume we want the default behavior of the wrapper.
[28]:
data = niimpy.read_csv(config.MULTIUSER_AWARE_CALLS_PATH, tz='Europe/Helsinki')
customized_features = com.extract_features_comms(data, features={call_count_all: {}})
computing <function call_count_all at 0x7cb35cd11440>...
[29]:
customized_features.head()
[29]:
|  | device | user | call_count_all |
|---|---|---|---|
| 2019-08-08 22:30:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 5 |
| 2019-08-08 23:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0 |
| 2019-08-08 23:30:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0 |
| 2019-08-09 00:00:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0 |
| 2019-08-09 00:30:00+03:00 | Cq9vueHh3zVs | iGyXetHE3S8u | 0 |