How to use IODataset from tensorflow_io?

Problem description

I'm trying to write a program that can use malicious pcap files as a dataset and predict whether packets in other pcap files are malicious. After some digging through the TensorFlow documentation I found TensorFlow I/O (tensorflow_io), but I don't know how to build a model from this dataset and use it to make predictions.

Here is my code:

%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
from tensorflow import keras

try:
  import tensorflow_io as tfio
  import tensorflow_datasets as tfds
except ImportError:
  !pip install tensorflow-io
  !pip install tensorflow-datasets

import tensorflow_io as tfio
import tensorflow_datasets as tfds

# print(tf.__version__)

dataset = tfio.IODataset.from_pcap("dataset.pcap")
print(dataset) # <PcapIODataset shapes: ((), ()), types: (tf.float64, tf.string)>

(Running on Google Colab)

I tried searching online for an answer but couldn't find anything.

Tags: python, tensorflow, tensorflow2.0, pcap, tensorflow-datasets

Solution


I downloaded two pcap files and concatenated them, then extracted packet_timestamp and packet_data. You will need to preprocess packet_data according to your own requirements. If you have labels to add, you can add them to the training dataset (in the model example below I created dummy all-zero labels and added them as a column). If the labels live in a separate file, you can zip them with the pcap dataset. Passing a dataset of (features, labels) pairs is all that Model.fit and Model.evaluate need.
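For instance, a minimal sketch of that zip approach could look like the following (assumptions: the dataset.pcap path from the question and all-zero placeholder labels):

import tensorflow as tf
import tensorflow_io as tfio

# Sketch of pairing each pcap record with a label via Dataset.zip
# (placeholder all-zero labels; substitute your real labels)
pcap_ds = tfio.IODataset.from_pcap('dataset.pcap')      # elements: (timestamp, packet_data)
num_packets = sum(1 for _ in pcap_ds)                   # one pass over the file just to count records
labels_ds = tf.data.Dataset.from_tensor_slices(
    tf.zeros([num_packets], dtype=tf.int32))            # one placeholder label per packet
paired_ds = tf.data.Dataset.zip((pcap_ds, labels_ds))   # elements: ((timestamp, packet_data), label)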

Below is an example built around packet_data preprocessing - you could probably adapt it along the lines of: if packet_data is valid then labels = valid else malicious.

%tensorflow_version 2.x
import tensorflow as tf
import tensorflow_io as tfio 
import numpy as np

# Create an IODataset from each pcap file
first_file = tfio.IODataset.from_pcap('/content/fuzz-2006-06-26-2594.pcap')
second_file = tfio.IODataset.from_pcap('/content/fuzz-2006-08-27-19853.pcap')

# Concatenate the Read Files
feature = first_file.concatenate(second_file)
# Lists to hold the extracted pcap fields
packet_timestamp_list = []
packet_data_list = []

# some dummy labels
labels = []

packets_total = 0
for v in feature:
    (packet_timestamp, packet_data) = v
    packet_timestamp_list.append(packet_timestamp.numpy())
    packet_data_list.append(packet_data.numpy())
    labels.append(0)
    if packets_total == 0:
        # these reference values come from the tensorflow_io test pcap file;
        # adjust or drop the asserts for your own capture files
        assert np.isclose(
            packet_timestamp.numpy(), 1084443427.311224, rtol=1e-15
        )  # timestamp of the first packet
        assert (
            len(packet_data.numpy()) == 62
        )  # packet data buffer length of the first packet
    packets_total += 1
assert (
    packets_total == 43
)  # expected total number of packets
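As a rough illustration of what that preprocessing could look like, the sketch below pads or truncates each packet's raw bytes to a fixed length and derives a label from a rule; MAX_LEN and is_malicious are placeholders of my own, not anything defined by tensorflow_io, and you would substitute your own logic:

# Sketch of one possible packet_data preprocessing (placeholder logic):
# pad/truncate each packet to a fixed length and scale the bytes to [0, 1].
MAX_LEN = 256  # assumed fixed feature length

def is_malicious(raw_bytes):
    # placeholder rule - replace with your actual "valid vs malicious" check
    return False

features = []
labels = []  # replaces the all-zero dummy labels above
for raw in packet_data_list:
    byte_values = np.frombuffer(raw, dtype=np.uint8)[:MAX_LEN]
    padded = np.pad(byte_values, (0, MAX_LEN - len(byte_values)))
    features.append(padded.astype(np.float32) / 255.0)
    labels.append(1 if is_malicious(raw) else 0)

features = np.stack(features)
labels = np.array(labels)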

Below is an example of using it in a model - this model will not work as-is because I have not preprocessed the string-typed packet_data. Do the preprocessing according to your requirements and then use it in the model.

%tensorflow_version 2.x
import tensorflow as tf
import tensorflow_io as tfio 
import numpy as np

# Create an IODataset from each pcap file
first_file = tfio.IODataset.from_pcap('/content/fuzz-2006-06-26-2594.pcap')
second_file = tfio.IODataset.from_pcap('/content/fuzz-2006-08-27-19853.pcap')

# Concatenate the Read Files
feature = first_file.concatenate(second_file)

# Lists to hold the extracted pcap fields
packet_timestamp = []
packet_data = []

# some dummy labels
labels = []

# add 0 as label. You can use your actual labels here
for v in feature:
  (timestamp, data) = v
  packet_timestamp.append(timestamp.numpy())
  packet_data.append(data.numpy())
  labels.append(0)

## Do the preprocessing of packet_data here

# Add labels to the training data
# Preprocess the packet_data to convert string to meaningful value and use here
train_ds = tf.data.Dataset.from_tensor_slices(((packet_timestamp,packet_data), labels))
# Set the batch size
train_ds = train_ds.shuffle(5000).batch(32)

##### PROGRAM WILL RUN SUCCESSFULLY TILL HERE. TO USE IN THE MODEL DO THE PREPROCESSING OF PACKET DATA AS EXPLAINED ### 

# Have defined some simple model
model = tf.keras.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(100),
  tf.keras.layers.Dense(10)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), 
              metrics=['accuracy'])

model.fit(train_ds, epochs=2)
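To cover the prediction part of the original question: once the model has been trained on preprocessed numeric features, other pcap files can be read the same way, run through the identical preprocessing, and scored with model.predict. A sketch under those assumptions (the file path and preprocess_packet below are stand-ins for your own):

# Sketch: score packets from another pcap file with the trained model.
# preprocess_packet() stands for the same preprocessing used at training time,
# and the file path is only an example.
new_ds = tfio.IODataset.from_pcap('/content/other_capture.pcap')
new_features = np.stack([preprocess_packet(data.numpy()) for _, data in new_ds])
logits = model.predict(new_features)
predicted_class = np.argmax(logits, axis=1)  # e.g. 0 = benign, 1 = malicious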

Hope this answers your question. Happy learning.
