How to find groups of adjacent true values in a binary tensor?

Problem description

I'm just getting started with TensorFlow (1.13) and I've run into a problem. I have a 3D binary tensor coming out of a neural network. In this tensor there are "groups" of true values, i.e. places where adjacent indices of the binary tensor hold true values. I want to extract these groups, where each group stores the indices of its adjacent true values.

For example, a 2D case (resulting in two groups):

[[ 0 1 1 0 0 ]
 [ 0 1 0 0 0 ]
 [ 0 0 0 0 0 ]
 [ 0 0 0 0 1 ]
 [ 0 1 1 1 0 ]]

My goal is the following output:

[[[0, 1], [0, 2], [1, 1]],
 [[3, 4], [4, 1], [4, 2], [4, 3]]]

I need this to determine the centers of the groups of adjacent true values, so the end goal looks like this:

[[0.33, 1.33],
 [3.75, 2.5]]
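For reference, each center is just the per-group mean of its coordinates. A quick NumPy check (using `[3, 4]` for the lone true value in row 3, which is where the example matrix has its 1):

```python
import numpy as np

# Desired grouped output from the example above
groups = [
    [[0, 1], [0, 2], [1, 1]],
    [[3, 4], [4, 1], [4, 2], [4, 3]],
]

# Center of each group = mean over its coordinates
centers = [np.mean(g, axis=0) for g in groups]
print(centers)  # [array([0.333..., 1.333...]), array([3.75, 2.5])]
```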

I tried creating edges between nodes within a certain distance. That gives me the adjacent indices for a given true value: for [0, 1] the result is the true values [0, 2] and [1, 1]. I have a list containing all of these edges, but I can't group them accordingly to produce the target output.

Something like k-means clustering could work, but for that I would need to know the number of groups of true values, which is unknown.
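As an aside (not a TensorFlow solution): on the NumPy side this is plain connected-component labeling, which `scipy.ndimage` does in one call, centers included. A sketch on the 2D example, assuming diagonal neighbors count as adjacent (which is what the two-group expected output implies):

```python
import numpy as np
from scipy import ndimage

arr = np.array([[0, 1, 1, 0, 0],
                [0, 1, 0, 0, 0],
                [0, 0, 0, 0, 0],
                [0, 0, 0, 0, 1],
                [0, 1, 1, 1, 0]])

# label() groups adjacent nonzero cells; the structure argument controls
# connectivity (all-ones 3x3 means 8-connectivity, so diagonals are adjacent)
labels, n_groups = ndimage.label(arr, structure=np.ones((3, 3)))

# Per-group centroid, one (row, col) tuple per label
centers = ndimage.center_of_mass(arr, labels, range(1, n_groups + 1))
print(n_groups, centers)
```

With the default structure (4-connectivity), `[3, 4]` would instead become its own third group.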

Here is a NumPy implementation that collects the adjacent indices into groups:

import numpy as np

arr = np.array([
    [0, 1],
    [0, 4],
    [1, 4],
    [1, 20],
    [3, 6],
    [6, 9],
    [9, 12],
    [5, 7],
])

# Goal [[0, 1, 4], [3, 6, 9, 12], [5, 7]]

group = []

def search(value, arr):
    # Rows whose first element matches the last node found so far
    nodes, = np.where(arr[:, 0] == value[-1])
    res = arr[nodes]

    if res.size != 0:
        sequence = np.unique(np.concatenate((value, res), axis=None))
        if sequence.size > np.unique(value).size:
            # New nodes were found, keep following the chain
            return search(sequence, arr)
        return sequence
    # Dead end: no successors, the group is complete
    return np.unique(value)


while True:

    # search iteratively:
    arr = np.reshape(arr, (-1, 2))
    next_node = search(arr[0], arr)

    group.append(next_node)

    # prevent searches from restarting and returning partial results:
    # remove the found nodes from the array
    mask = np.isin(arr, next_node, invert=True)
    arr = arr[mask]
    if arr.size == 0:
        break

print(group)
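An alternative to the recursive search: the same edge grouping can be done iteratively with a small union-find, which also handles chains that dead-end. `group_edges` below is a hypothetical helper, not part of the question's code:

```python
import numpy as np

edges = np.array([
    [0, 1], [0, 4], [1, 4], [1, 20],
    [3, 6], [6, 9], [9, 12],
    [5, 7],
])

def group_edges(edges):
    parent = {}

    def find(x):
        # Walk up to the root, halving the path as we go
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Merge the two endpoints of every edge
    for a, b in edges:
        parent[find(a)] = find(b)

    # Collect nodes by their root
    clusters = {}
    for node in parent:
        clusters.setdefault(find(node), []).append(int(node))
    return sorted(sorted(c) for c in clusters.values())

print(group_edges(edges))
# [[0, 1, 4, 20], [3, 6, 9, 12], [5, 7]]
```

Note that 20 lands in the first group because of the `[1, 20]` edge, so the goal comment above would strictly read `[0, 1, 4, 20]`.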

I haven't been able to rewrite this into working TensorFlow code, and by now I'm not even sure this is the right way to approach the problem. I've been banging my head against it for the past week, so I hope you can help. Thanks for considering!

Tags: python, tensorflow, recursion, indexing, grouping

Solution


It took some work, but I managed to do it with a couple of tf.while_loops. The idea is simple: take one element, find the adjacent elements, then do the same with those, and so on until no more are added; that is one cluster. Then continue with another unassigned element, and so on until no elements are left. Since tensors need to have rectangular shapes, instead of using something like ragged tensors I simply format the output as two tensors: one with the coordinates and another with the index of the cluster each coordinate belongs to (you can change the format from there if you want). The code will certainly not be fast (TensorFlow loops generally are not, and the algorithm does a quadratic number of comparisons per innermost iteration), but at least it should give you an answer.

Anyway, here is the code:

import tensorflow as tf

def find_clusters(arr):
    # Find coordinates of ones
    coords = tf.where(tf.dtypes.cast(arr, tf.bool))
    s = tf.shape(coords)
    d = coords.shape[1]
    cluster_idx = tf.TensorArray(tf.int32, 0, element_shape=[None],
                                 dynamic_size=True, infer_shape=False)
    cluster_coords = tf.TensorArray(coords.dtype, 0, element_shape=[None, d],
                                    dynamic_size=True, infer_shape=False)
    i = tf.constant(0, tf.int32)
    i_step = tf.constant(0, tf.int32)
    _, _, _, cluster_idx, cluster_coords = tf.while_loop(
        # While there are unassigned coordinates
        lambda i, i_step, coords, cluster_idx, cluster_coords: tf.shape(coords)[0] > 0,
        # Find new cluster
        next_cluster,
        [i, i_step, coords, cluster_idx, cluster_coords],
        parallel_iterations=1,
        shape_invariants=[i.shape, i_step.shape, tf.TensorShape([None, d]),
                          tf.TensorShape(None), tf.TensorShape(None)])
    return cluster_idx.concat(), cluster_coords.concat()

def next_cluster(i, i_step, coords, cluster_idx, cluster_coords):
    current = coords[:1]
    coords = coords[1:]
    cluster_idx = cluster_idx.write(i_step, [i])
    cluster_coords = cluster_coords.write(i_step, current)
    i_step += 1
    s = tf.TensorShape([None, coords.shape[1]])
    i, i_step, coords, _, cluster_idx, cluster_coords = tf.while_loop(
        # While new elements are added to the cluster
        (lambda i, i_step, coords, current, cluster_idx, cluster_coords:
             tf.not_equal(tf.shape(current)[0], 0)),
        # Find new neighbors
        find_neighbors,
        [i, i_step, coords, current, cluster_idx, cluster_coords],
        parallel_iterations=1,
        shape_invariants=[i.shape, i_step.shape, s, s,
                          tf.TensorShape(None), tf.TensorShape(None)])
    return i + 1, i_step, coords, cluster_idx, cluster_coords

def find_neighbors(i, i_step, coords, current, cluster_idx, cluster_coords):
    # Find coordinates at distance exactly one of previous coordinates
    dist = tf.reduce_sum(tf.abs(tf.expand_dims(current, 1) - coords), axis=-1)
    is_close = tf.reduce_any(tf.equal(dist, 1), axis=0)
    # Split between neighbors and the rest
    coords, current = tf.dynamic_partition(coords, tf.dtypes.cast(is_close, tf.int32), 2)
    # Write newly found cluster coordinates
    cluster_idx = cluster_idx.write(i_step, tf.fill([tf.shape(current)[0]], i))
    cluster_coords = cluster_coords.write(i_step, current)
    i_step += 1
    return i, i_step, coords, current, cluster_idx, cluster_coords

# Test
with tf.Graph().as_default(), tf.Session() as sess:
    data = tf.constant([[0, 1, 1, 0, 0],
                        [0, 1, 0, 0, 0],
                        [0, 0, 0, 0, 0],
                        [0, 0, 0, 0, 1],
                        [0, 1, 1, 1, 0]])
    cluster_idx, cluster_coords = find_clusters(data)
    for cluster, coord in zip(*sess.run((cluster_idx, cluster_coords))):
        print(f'{coord}: cluster {cluster}')

Output:

[0 1]: cluster 0
[0 2]: cluster 0
[1 1]: cluster 0
[3 4]: cluster 1
[4 1]: cluster 2
[4 2]: cluster 2
[4 3]: cluster 2
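From this flat format, the centers the question ultimately asks for are one reduction away. A NumPy sketch over the printed result (in-graph, `tf.math.segment_mean(cluster_coords, cluster_idx)` should give the same, since the cluster ids come out sorted):

```python
import numpy as np

# Flat output from find_clusters above
cluster_idx = np.array([0, 0, 0, 1, 2, 2, 2])
cluster_coords = np.array([[0, 1], [0, 2], [1, 1],
                           [3, 4],
                           [4, 1], [4, 2], [4, 3]], dtype=float)

# Mean coordinate per cluster id
centers = np.array([cluster_coords[cluster_idx == i].mean(axis=0)
                    for i in np.unique(cluster_idx)])
print(centers)
```

With Manhattan distance exactly 1 as the adjacency test, `[3, 4]` is its own cluster, so three centers come out here rather than the two in the question (which implicitly treats diagonal neighbors as adjacent).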

EDIT:

If you do want the output as a ragged tensor, here is a simple function to convert from the previous format:

import tensorflow as tf

def clusters_to_ragged(cluster_idx, cluster_coords):
    d = cluster_idx[1:] - cluster_idx[:-1]
    s = tf.where(d > 0)[:, 0] + 1
    starts = tf.concat([[0], s], axis=0)
    limits = tf.concat([s, [tf.shape(d)[0] + 1]], axis=0)
    r = tf.ragged.range(starts, limits)
    return tf.gather(cluster_coords, r)

# Test
with tf.Graph().as_default(), tf.Session() as sess:
    # Result in previous format
    cluster_idx = tf.constant([0, 0, 0, 1, 2, 2, 2])
    cluster_coords = tf.constant([[0, 1],
                                  [0, 2],
                                  [1, 1],
                                  [3, 4],
                                  [4, 1],
                                  [4, 2],
                                  [4, 3]])
    ragged = clusters_to_ragged(cluster_idx, cluster_coords)
    print(*sess.run(ragged).to_list(), sep='\n')
    # [[0, 1], [0, 2], [1, 1]]
    # [[3, 4]]
    # [[4, 1], [4, 2], [4, 3]]
