Category: cw2vec: Learning Chinese Word Embeddings with Stroke n-gram Information

From Big Physics


Shaosheng Cao, Wei Lu, Jun Zhou and Xiaolong Li, cw2vec: Learning Chinese Word Embeddings with Stroke n-gram Information, In Proceedings of AAAI 2018

Abstract

In this paper, we propose cw2vec, a novel method for learning Chinese word embeddings. It is based on our observation that exploiting stroke-level information is crucial for improving the learning of Chinese word embeddings. Specifically, we design a minimalist approach to exploit such features, by using stroke n-grams, which capture semantic and morphological level information of Chinese words. Through qualitative analysis, we demonstrate that our model is able to extract semantic information that cannot be captured by existing methods. Empirical results on the word similarity, word analogy, text classification and named entity recognition tasks show that the proposed approach consistently outperforms state-of-the-art approaches such as word-based word2vec and GloVe, character-based CWE, component-based JWE and pixel-based GWE.

Summary and Comments

The word2vec algorithm[1][2] learns word vectors from the frequencies with which other words appear in a given word's context; these vectors reveal semantic relations between words and are widely used in natural language processing. In phonetic languages, the internal structure of a word usually carries little meaning: it essentially just encodes a pronunciation. Chinese characters are different, however, in that their internal structure often reflects relations in meaning. For example, 妈 (mother), 妹 (younger sister), 姐 (elder sister), 姑 (aunt) and 奶 (grandmother) all carry the radical 女 (woman), and these characters are indeed all related to 女. The authors of this work[3] noticed this connection and therefore opened the characters up, using the more basic structural unit, the stroke, to discover meaning-level relations between Chinese characters.
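The co-occurrence idea behind word2vec can be illustrated with a minimal skip-gram pair generator. This is only a sketch of how the corpus is read; a real implementation also trains vectors with negative sampling, subsampling, and so on:

```python
# Minimal sketch of skip-gram pair extraction: each word is paired with the
# words inside a small window around it, and word2vec learns vectors that
# predict these co-occurrences.

def skipgram_pairs(tokens, window=2):
    """Yield (center, context) pairs within a fixed symmetric window."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["我", "爱", "你"], window=1))
# → [('我', '爱'), ('爱', '我'), ('爱', '你'), ('你', '爱')]
```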

A short introduction to this work can be found at [4]. In fact, on top of the idea above, the work goes further and more radically: it represents characters directly by n-grams of strokes (which the authors classify into five types) and then trains vector representations of these n-grams. Representations of the actual words come out of this training as well, since an actual word is just one item among its stroke n-grams.
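The stroke n-gram idea can be sketched as follows. This is not the authors' code: the stroke-class dictionary is a tiny hand-made sample, and the n-gram length range of 3 to 12 is taken here as an assumption about the paper's setting.

```python
# Sketch of stroke n-gram extraction. Each stroke is mapped to one of the
# five classes the paper uses: 1 horizontal, 2 vertical, 3 left-falling,
# 4 right-falling, 5 turning. The dictionary below covers only two sample
# characters (illustrative; a real system needs a full stroke database).

STROKES = {
    "大": "134",  # 一 (1), ノ (3), 丶 (4)
    "人": "34",   # ノ (3), 丶 (4)
}

def stroke_ngrams(word, n_min=3, n_max=12):
    """Concatenate the stroke codes of a word's characters, then slide
    windows of every length from n_min to n_max over the sequence."""
    seq = "".join(STROKES.get(ch, "") for ch in word)
    grams = []
    for n in range(n_min, min(n_max, len(seq)) + 1):
        for i in range(len(seq) - n + 1):
            grams.append(seq[i:i + n])
    return grams

print(stroke_ngrams("大人"))
# → ['134', '343', '434', '1343', '3434', '13434']
```

Each of these n-grams gets its own vector during training, and a word is tied to all n-grams its stroke sequence contains.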

Further Research

Going a step further, it is natural to ask: what happens if, instead of strokes, we decompose characters according to the Chinese character network we have already built, splitting each character into its directly connected next-level structures, and train on that?

Concretely, we first transform the original text once, replacing every character by its next-level sub-structures: for example 照 -> 火昭, but not 照 -> 火口日刀; and if 昭 itself appears, it is likewise replaced, 昭 -> 日召. We then run word2vec on the transformed text, with characters (or n-grams of characters) as the units, to obtain the vectors. If we work at the character level, the characters sitting at the top of the structural hierarchy end up with no vector representation: they have been decomposed away and no longer occur in the transformed text. So we still need a way to obtain vectors for these top-level characters. This can be done by simple vector addition of their components, or by running word2vec once more locally for these characters (which play the role of words in the transformed text). See [5][6] for details.
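The pipeline above can be sketched in a few lines. This is not the authors' code: the decomposition table is a tiny hand-made sample, and the two-dimensional vectors are toy numbers standing in for the embeddings that word2vec would produce on the transformed text.

```python
# One-level character decomposition, e.g. 照 -> 火昭 (not the full
# decomposition 照 -> 火口日刀). The table is an illustrative sample.
DECOMP = {
    "照": "火昭",
    "昭": "日召",
}

def transform(text):
    """Replace each character of the original text by its direct
    sub-structures; characters without an entry are kept as-is."""
    return "".join(DECOMP.get(ch, ch) for ch in text)

# Toy embeddings for the components (stand-ins for word2vec output).
VEC = {"火": [1.0, 0.0], "昭": [0.0, 1.0]}

def compose(char, vectors):
    """Top-level characters vanish from the transformed corpus, so they
    get no embedding; the simple fix discussed above is to sum the
    vectors of their direct components."""
    parts = DECOMP.get(char, char)
    return [sum(vs) for vs in zip(*(vectors[p] for p in parts))]

print(transform("照"))          # → 火昭
print(compose("照", VEC))       # → [1.0, 1.0]
```

The alternative mentioned above, running word2vec again locally with the top-level characters as "words" of the transformed text, would replace `compose` with a second, smaller training pass.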

References

  1. T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient Estimation of Word Representations in Vector Space, arXiv preprint arXiv:1301.3781.
  2. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean, Distributed Representations of Words and Phrases and Their Compositionality, Advances in Neural Information Processing Systems, 3111-3119.
  3. Shaosheng Cao, Wei Lu, Jun Zhou, Xiaolong Li, cw2vec: Learning Chinese Word Embeddings with Stroke n-gram Information, In Proceedings of AAAI 2018, http://www.statnlp.org/wp-content/uploads/papers/2018/cw2vec/cw2vec.pdf
  4. jaylin008, word2vec与cw2vec的数学原理 (The Mathematics of word2vec and cw2vec), https://www.jianshu.com/p/f258d0c5c317
  5. Jinxing Yu, Xun Jian, Hao Xin, Yangqiu Song, Joint Embeddings of Chinese Words, Characters, and Fine-grained Subcharacter Components, In Proceedings of EMNLP.
  6. Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, Huanbo Luan, Joint Learning of Character and Word Embeddings, In Proceedings of IJCAI.
