Project name: OpenNE
Project URL: https://gitee.com/mirrors/OpenNE
Project description:

OpenNE: An open source toolkit for Network Embedding

This repository provides a standard NE/NRL (Network Representation Learning) training and testing framework. In this framework, we unify the input and output interfaces of different NE models and provide scalable options for each model. Moreover, we implement typical NE models under this framework based on TensorFlow, which enables these models to be trained with GPUs. We developed this toolkit following the settings of DeepWalk. The implemented or modified models include DeepWalk, LINE, node2vec, GraRep, TADW, GCN, HOPE, GF, SDNE and LE. We will continue to implement more representative NE models according to our released NRL paper list. We also welcome other researchers to contribute NE models to this toolkit based on our framework; contributions will be acknowledged in this project.

Usage

Installation
General Options

You can list the other options available in OpenNE using:

python -m openne --help
Example

To run "node2vec" on the BlogCatalog network and evaluate the learned representations on the multi-label node classification task, run the following command in the home directory of this project:

python -m openne --method node2vec --label-file data/blogCatalog/bc_labels.txt --input data/blogCatalog/bc_adjlist.txt --graph-format adjlist --output vec_all.txt --q 0.25 --p 0.25

To run "gcn" on the Cora network and evaluate the learned representations on the multi-label node classification task, run the following command in the home directory of this project:

python -m openne --method gcn --label-file data/cora/cora_labels.txt --input data/cora/cora_edgelist.txt --graph-format edgelist --feature-file data/cora/cora.features --epochs 200 --output vec_all.txt --clf-ratio 0.1

(A self-contained toy-data sketch appears after the Specific Options list below.)

Specific Options

DeepWalk and node2vec:
LINE:
GraRep:
TADW:
GCN:
GraphFactorization:
SDNE:
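As a complement to the BlogCatalog and Cora examples above, the sketch below builds a tiny toy graph and label file in the formats described in the Input and Evaluation sections that follow, then calls the documented command line. The toy file names, the graph itself, and the use of Python's subprocess module are illustrative assumptions rather than part of OpenNE; for realistic results use the datasets bundled under data/.

```python
# Illustrative only: write a toy edgelist and label file in the documented
# formats, then invoke the OpenNE command line. File names and the toy graph
# are assumptions; a real run should use the data/ directories of the project.
import subprocess

# Toy graph in the documented edgelist format: "node1 node2 <weight_float, optional>"
edges = ["0 1", "1 2", "2 0", "2 3", "3 4", "4 2"]
with open("toy_edgelist.txt", "w") as f:
    f.write("\n".join(edges) + "\n")

# Toy labels in the documented label format: "node label1 label2 label3..."
labels = ["0 A", "1 A", "2 A", "3 B", "4 B"]
with open("toy_labels.txt", "w") as f:
    f.write("\n".join(labels) + "\n")

# Invoke the documented CLI; only flags shown elsewhere in this README are used.
# Note: this graph is deliberately tiny, so the learned vectors are meaningless.
subprocess.run(
    [
        "python", "-m", "openne",
        "--method", "node2vec",
        "--input", "toy_edgelist.txt",
        "--graph-format", "edgelist",
        "--label-file", "toy_labels.txt",
        "--output", "toy_vec.txt",
        "--p", "0.25",
        "--q", "0.25",
    ],
    check=True,
)
```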
Input

The supported input formats are an edgelist and an adjlist:

edgelist: node1 node2 <weight_float, optional>
adjlist: node n1 n2 n3 ... nk

The graph is assumed to be undirected and unweighted by default. These options can be changed by setting the appropriate flags.

If the model needs additional features, the supported feature input format is as follows (feature_i should be a float number):

node feature_1 feature_2 ... feature_n

Output

The output file has n+1 lines for a graph with n nodes. The first line has the following format:

num_of_nodes dim_of_representation

The next n lines are as follows:

node_id dim1 dim2 ... dimd

where dim1, ..., dimd is the d-dimensional representation learned by OpenNE.

Evaluation

If you want to evaluate the learned node representations, you can input the node labels. OpenNE will use a portion (default: 50%) of the nodes to train a classifier and calculate the F1-score on the remaining nodes. The supported input label format is:

node label1 label2 label3...

(A self-contained sketch that parses the output file, runs this kind of evaluation, and draws a t-SNE plot appears after the comparison results below.)

Embedding visualization

To show how to apply dimension reduction methods like t-SNE and PCA to embedding visualization, we choose the 20 newsgroups dataset. Using the text features, we build the news network and visualize it by running:

cd visualization_example
python 20newsgroup.py
tensorboard --logdir=log/

After running TensorBoard, visit the address it reports to see the visualization.

Comparisons with other implementations

Running environment:

We show the node classification results of various methods on different datasets. We set the representation dimension to 128 and kstep=4 in GraRep. Note that both GCN (a semi-supervised NE model) and TADW need additional text features as inputs. Thus, we evaluate these two models on Cora, in which each node has text information. We use 10% labeled data to train GCN.

BlogCatalog: 10312 nodes, 333983 edges, 39 labels, undirected:
Wiki (the Wiki dataset was provided by the LBC project, but the original link is no longer available): 2405 nodes, 17981 edges, 19 labels, directed:
Cora: 2708 nodes, 5429 edges, 7 labels, directed:
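The comparison numbers above come from OpenNE's built-in evaluation pipeline. For working with the learned vectors directly, the sketch below parses the documented output format, reproduces a simplified version of the train/test F1 evaluation with scikit-learn, and draws a t-SNE plot as a lightweight alternative to the TensorBoard visualization. The file paths, the single-label simplification (BlogCatalog is actually multi-label), the logistic-regression classifier, and the plotting choices are all assumptions for illustration, not OpenNE's internal code.

```python
# Illustrative post-processing of an OpenNE output file: parsing, a simplified
# node-classification evaluation, and a t-SNE plot. This is NOT OpenNE's own
# evaluation or visualization code; paths and modelling choices are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def load_embeddings(path):
    """Parse the documented output: a 'num_of_nodes dim' header,
    then 'node_id dim1 dim2 ... dimd' lines."""
    embeddings = {}
    with open(path) as f:
        num_nodes, dim = map(int, f.readline().split())
        for line in f:
            parts = line.strip().split()
            if parts:
                embeddings[parts[0]] = np.array(parts[1:], dtype=float)
    assert len(embeddings) == num_nodes
    return embeddings


def load_labels(path):
    """Parse the documented label format 'node label1 label2 label3...';
    only the first label per node is kept in this simplified sketch."""
    labels = {}
    with open(path) as f:
        for line in f:
            parts = line.strip().split()
            if parts:
                labels[parts[0]] = parts[1]
    return labels


# Placeholder paths: substitute the actual --output and --label-file arguments.
emb = load_embeddings("vec_all.txt")
lab = load_labels("data/cora/cora_labels.txt")

nodes = [n for n in emb if n in lab]
X = np.stack([emb[n] for n in nodes])
y = np.array([lab[n] for n in nodes])

# 50% train / 50% test, matching the default ratio of the Evaluation section.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("Micro-F1:", f1_score(y_te, pred, average="micro"))
print("Macro-F1:", f1_score(y_te, pred, average="macro"))

# Lightweight alternative to the TensorBoard projector: reduce the embeddings
# to 2-D with t-SNE and color nodes by their (first) label.
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)
classes = sorted(set(y))
colors = [classes.index(c) for c in y]
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=colors, cmap="tab10", s=8)
plt.title("t-SNE of OpenNE embeddings")
plt.savefig("embedding_tsne.png", dpi=150)
```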
Citing

If you find OpenNE useful for your research, please consider citing the following papers:

@inproceedings{perozzi2014deepwalk,
  title     = {Deepwalk: Online learning of social representations},
  author    = {Perozzi, Bryan and Al-Rfou, Rami and Skiena, Steven},
  booktitle = {Proceedings of KDD},
  pages     = {701--710},
  year      = {2014}
}

@inproceedings{tang2015line,
  title     = {Line: Large-scale information network embedding},
  author    = {Tang, Jian and Qu, Meng and Wang, Mingzhe and Zhang, Ming and Yan, Jun and Mei, Qiaozhu},
  booktitle = {Proceedings of WWW},
  pages     = {1067--1077},
  year      = {2015}
}

@inproceedings{grover2016node2vec,
  title     = {node2vec: Scalable feature learning for networks},
  author    = {Grover, Aditya and Leskovec, Jure},
  booktitle = {Proceedings of KDD},
  pages     = {855--864},
  year      = {2016}
}

@article{kipf2016semi,
  title   = {Semi-supervised classification with graph convolutional networks},
  author  = {Kipf, Thomas N and Welling, Max},
  journal = {arXiv preprint arXiv:1609.02907},
  year    = {2016}
}

@inproceedings{cao2015grarep,
  title     = {Grarep: Learning graph representations with global structural information},
  author    = {Cao, Shaosheng and Lu, Wei and Xu, Qiongkai},
  booktitle = {Proceedings of CIKM},
  pages     = {891--900},
  year      = {2015}
}

@inproceedings{yang2015network,
  title     = {Network representation learning with rich text information},
  author    = {Yang, Cheng and Liu, Zhiyuan and Zhao, Deli and Sun, Maosong and Chang, Edward},
  booktitle = {Proceedings of IJCAI},
  year      = {2015}
}

@article{tu2017network,
  title   = {Network representation learning: an overview},
  author  = {Tu, Cunchao and Yang, Cheng and Liu, Zhiyuan and Sun, Maosong},
  journal = {SCIENTIA SINICA Informationis},
  volume  = {47},
  number  = {8},
  pages   = {980--996},
  year    = {2017}
}

@inproceedings{ou2016asymmetric,
  title        = {Asymmetric transitivity preserving graph embedding},
  author       = {Ou, Mingdong and Cui, Peng and Pei, Jian and Zhang, Ziwei and Zhu, Wenwu},
  booktitle    = {Proceedings of the 22nd ACM SIGKDD},
  pages        = {1105--1114},
  year         = {2016},
  organization = {ACM}
}

@inproceedings{belkin2002laplacian,
  title     = {Laplacian eigenmaps and spectral techniques for embedding and clustering},
  author    = {Belkin, Mikhail and Niyogi, Partha},
  booktitle = {Advances in Neural Information Processing Systems},
  pages     = {585--591},
  year      = {2002}
}

@inproceedings{ahmed2013distributed,
  title        = {Distributed large-scale natural graph factorization},
  author       = {Ahmed, Amr and Shervashidze, Nino and Narayanamurthy, Shravan and Josifovski, Vanja and Smola, Alexander J},
  booktitle    = {Proceedings of the 22nd International Conference on World Wide Web},
  pages        = {37--48},
  year         = {2013},
  organization = {ACM}
}

@inproceedings{wang2016structural,
  title        = {Structural deep network embedding},
  author       = {Wang, Daixin and Cui, Peng and Zhu, Wenwu},
  booktitle    = {Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
  pages        = {1225--1234},
  year         = {2016},
  organization = {ACM}
}

Sponsor

This research is supported by Tencent, MSRA, NSFC and BBDM-Lab.