pytorch where is Embedding "max_norm" implemented?

Question

The "embedding" class documentation https://pytorch.org/docs/stable/nn.html says

max_norm (float, optional) – If given, will renormalize the embedding vectors to have a norm lesser than this before extracting.

1) In my model, I use this Embedding class as a parameter, not just as an input (the model learns the embedding). In this case, I assume the embedding gets renormalized every time an update happens, not only when it is initialized. Is my understanding correct?

2) I wanted to confirm 1) by looking at the source, but I couldn't find the implementation in the PyTorch Embedding class: https://pytorch.org/docs/stable/_modules/torch/nn/modules/sparse.html Can someone point me to the max_norm implementation?

Tags: pytorch

Answer


If you look at the forward function of the Embedding class here, you will see a reference to torch.nn.functional.embedding, which (per the cpp documentation here) uses embedding_renorm_, meaning it is a C++ implementation. Some GitHub searching in the pytorch repo pointed to this file (1, 2).

The answer to 1) is essentially yes: the renormalization is not a one-time initialization step, but note that it is applied lazily inside forward, to the rows being looked up, rather than after each optimizer update. The answer to 2) is above.
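The lookup-time behavior can be observed directly. The sketch below is a hypothetical toy example (dimensions and values chosen arbitrarily): it inflates the embedding weights so every row's norm exceeds max_norm, performs a lookup, and then checks which rows were renormalized in place by embedding_renorm_.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy embedding table with max_norm enabled.
emb = nn.Embedding(num_embeddings=5, embedding_dim=3, max_norm=1.0)

# Inflate the weights so every row's norm is well above max_norm.
with torch.no_grad():
    emb.weight.mul_(10.0)
    norms_before = emb.weight.norm(dim=1).clone()

# Look up only rows 0 and 2; forward triggers the in-place renorm.
idx = torch.tensor([0, 2])
_ = emb(idx)

with torch.no_grad():
    norms_after = emb.weight.norm(dim=1)

print(norms_before)  # all rows well above 1.0
print(norms_after)   # rows 0 and 2 clipped to <= 1.0; rows 1, 3, 4 untouched
```

Only the looked-up rows (0 and 2) end up with norms at most max_norm, while the never-accessed rows keep their inflated norms, which shows the clipping happens at lookup time inside forward, not at initialization or after parameter updates.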
