While building a question answering system, I saw that Tencent has officially open-sourced a large-scale, high-quality Chinese word embedding dataset, Tencent_AILab_ChineseEmbedding.txt. I could have wept with joy. Download page: https://ai.tencent.com/ailab/nlp/embedding.html , which includes an introduction to the dataset and a link to the accompanying paper.
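For context, the loader below assumes the standard plain-text word2vec layout, which is what the Tencent file uses: a header line with the vocabulary size and the vector dimension (200 for this release), then one line per word holding the word followed by its space-separated float components. Schematically (placeholder values, not actual file contents):

    <vocab_size> 200
    word1 0.123 -0.456 ... (200 floats in total)
    word2 0.789 0.012 ...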
I quickly wrote some code to plug it into my own QA system. Training is still running, but the loss in the first few steps does drop faster than with random initialization. Recording the loading code here:
import numpy as np
import tensorflow as tf
from tqdm import tqdm

def loadEmbedding(sess, embeddingFile, word2id, embeddingSize):
    """Initialize embeddings with pre-trained word2vec vectors.

    Will modify the embedding weights of the currently loaded model.
    sess: TensorFlow session
    embeddingFile: path to Tencent_AILab_ChineseEmbedding.txt
    word2id: the word-to-id mapping of your own dataset
    embeddingSize: embedding dimension; I set it to 200, the same as the
        original vectors. Dimensions below 200 should work with the
        commented-out PCA code below, but I haven't tested that.
    """
    with tf.name_scope('embedding_layer'):
        embedding = tf.get_variable('embedding', [len(word2id), embeddingSize])
        # New model: load the pre-trained word2vec data and initialize embeddings
        print("Loading pre-trained word embeddings from %s " % embeddingFile)
        # Note: the Tencent embedding file is UTF-8 encoded; reading it with
        # ISO-8859-1 would garble the Chinese words and break the lookup below.
        with open(embeddingFile, "r", encoding='utf-8') as f:
            header = f.readline()
            vocab_size, vector_size = map(int, header.split())
            # Words missing from the pre-trained file keep a random initialization
            initW = np.random.uniform(-0.25, 0.25, (len(word2id), vector_size))
            for _ in tqdm(range(vocab_size)):
                line = f.readline()
                lists = line.split(' ')
                word = lists[0]
                if word in word2id:
                    vector = np.array(list(map(float, lists[1:])))
                    initW[word2id[word]] = vector
            # # PCA decomposition to reduce word2vec dimensionality
            # if embeddingSize < vector_size:
            #     U, s, Vt = np.linalg.svd(initW, full_matrices=False)
            #     S = np.zeros((vector_size, vector_size), dtype=complex)
            #     S[:vector_size, :vector_size] = np.diag(s)
            #     initW = np.dot(U[:, :embeddingSize], S[:embeddingSize, :embeddingSize])
        # Overwrite the variable's values with the loaded matrix
        sess.run(embedding.assign(initW))
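For completeness, here is a minimal sketch of how the function might be called. The toy word2id and the session setup are illustrative assumptions on my part, not part of the original code:

import tensorflow as tf

# Hypothetical vocabulary for illustration; in practice word2id comes from
# your own dataset's preprocessing.
word2id = {'<pad>': 0, '<unk>': 1, '你好': 2, '世界': 3}

with tf.Session() as sess:
    # Build and initialize the rest of the model graph first; loadEmbedding
    # then creates the 'embedding' variable and fills it via assign(), so the
    # pre-trained values are not overwritten by the initializer.
    sess.run(tf.global_variables_initializer())
    loadEmbedding(sess, 'Tencent_AILab_ChineseEmbedding.txt', word2id, embeddingSize=200)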