Publications
A growing collection of my publications.
2024
- A Survey on Game Playing Agents and Large Models: Methods, Applications, and Challenges. Xu, Xinrun; Wang, Yuxin; Xu, Chaoyi; Ding, Ziluo; Jiang, Jiechuan; Ding, Zhiming; and Karlsson, Borje F. Arxiv 2024.
The swift evolution of Large-scale Models (LMs), whether language-focused or multi-modal, has garnered extensive attention in both academia and industry. Yet despite the surge of interest in this rapidly evolving area, systematic reviews of their capabilities and potential in distinct, impactful scenarios remain scarce. This paper endeavours to help bridge this gap, offering a thorough examination of the current landscape of LM usage in complex game-playing scenarios and of the challenges that remain open. We systematically review the existing architectures of LM-based Agents (LMAs) for games and summarize their commonalities, challenges, and other insights. Furthermore, we present our perspective on promising future research avenues for the advancement of LMs in games. We hope to assist researchers in gaining a clear understanding of the field and to generate more interest in this highly impactful research direction. A corresponding, continuously updated resource can be found in our GitHub repository.
2023
- Open-world Story Generation with Structured Knowledge Enhancement: A Comprehensive Survey. Wang, Yuxin; Lin, Jieru; Yu, Zhiwei; Hu, Wei; and Karlsson, Borje F. Neurocomputing 2023.
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, like the need for global coherence in generated stories, still hamper generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, an approach referred to as structured knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides an up-to-date and comprehensive review of this research field: (i) we present a systematic taxonomy of how existing methods integrate structured knowledge into story generation; (ii) we summarize the story corpora, structured knowledge datasets, and evaluation metrics involved; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.
- Lifelong Embedding Learning and Transfer for Growing Knowledge Graphs. Cui, Yuanning; Wang, Yuxin; Sun, Zequn; Liu, Wenqiang; Jiang, Yiqiao; Han, Kexin; and Hu, Wei. In Association for the Advancement of Artificial Intelligence – AAAI 2023.
Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static, but rather evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge as the KG grows. Motivated by this, in this paper we delve into an expanding field of KG embedding, i.e., lifelong KG embedding. We consider knowledge transfer and retention when learning on growing snapshots of a KG, without having to learn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms the state-of-the-art inductive and lifelong embedding baselines.
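The embedding-regularization idea from the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's exact formulation: when training on a new KG snapshot, old entities' embeddings are penalized for drifting away from their previous values, while brand-new entities remain unconstrained. All names and the weighting scheme are assumptions.

```python
# Hypothetical sketch of anti-forgetting embedding regularization:
# penalize drift of previously learned entity embeddings across snapshots.

def regularization_loss(new_emb, old_emb, weight=1.0):
    """Squared-L2 penalty between current and previous-snapshot embeddings.

    new_emb / old_emb: dicts mapping entity id -> embedding (list of floats).
    Only entities present in the old snapshot are penalized; new entities
    (absent from old_emb) are free to move.
    """
    loss = 0.0
    for ent, old_vec in old_emb.items():
        new_vec = new_emb[ent]
        loss += sum((a - b) ** 2 for a, b in zip(new_vec, old_vec))
    return weight * loss

old = {"e1": [1.0, 0.0], "e2": [0.0, 1.0]}
new = {"e1": [1.0, 0.0], "e2": [0.5, 1.0], "e3": [0.2, 0.3]}  # e3 is new
print(regularization_loss(new, old))  # 0.25: only e2 drifted
```

In practice this term would be added to the main embedding-learning loss, with the weight trading off plasticity on new facts against retention of old knowledge.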
2022
- Facing Changes: Continual Entity Alignment for Growing Knowledge Graphs. Wang, Yuxin; Cui, Yuanning; Liu, Wenqiang; Sun, Zequn; Jiang, Yiqiao; Han, Kexin; and Hu, Wei. In The Semantic Web – ISWC 2022.
Entity alignment is a basic and vital technique in knowledge graph (KG) integration. Over the years, research on entity alignment has rested on the assumption that KGs are static, which neglects the growing nature of real-world KGs. As KGs grow, previous alignment results need to be revisited while new alignments wait to be discovered. In this paper, we propose and dive into a realistic yet unexplored setting, referred to as continual entity alignment. To avoid retraining an entire model on the whole KGs whenever new entities and triples arrive, we present a continual alignment method for this task. It reconstructs an entity’s representation based on entity adjacency, enabling it to generate embeddings for new entities quickly and inductively using their existing neighbors. It selects and replays partial pre-aligned entity pairs to train only parts of the KGs while extracting trustworthy alignments for knowledge augmentation. As growing KGs inevitably contain non-matchable entities, the proposed method, unlike previous works, employs bidirectional nearest neighbor matching to find new entity alignments and update old ones. Furthermore, we construct new datasets by simulating the growth of multilingual DBpedia. Extensive experiments demonstrate that our continual alignment method is more effective than baselines based on retraining or inductive learning.
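The bidirectional nearest-neighbor matching mentioned above can be sketched as a mutual check: a pair (s, t) is accepted only if t is s's nearest neighbor in the target KG and s is t's nearest neighbor in the source KG, which helps filter out non-matchable entities. The similarity representation and names below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of bidirectional (mutual) nearest-neighbor matching
# over a similarity table between source and target entities.

def mutual_nearest_neighbors(sim):
    """sim[s][t]: similarity between source entity s and target entity t.
    Returns the set of mutually-nearest (s, t) pairs."""
    targets = {t for row in sim.values() for t in row}
    # For each source entity, its nearest target.
    best_t = {s: max(row, key=row.get) for s, row in sim.items()}
    # For each target entity, its nearest source.
    best_s = {t: max(sim, key=lambda s: sim[s][t]) for t in targets}
    # Keep only pairs where the preference is mutual.
    return {(s, t) for s, t in best_t.items() if best_s[t] == s}

sim = {
    "a1": {"b1": 0.9, "b2": 0.2},
    "a2": {"b1": 0.8, "b2": 0.3},  # a2 prefers b1, but b1 prefers a1
}
print(mutual_nearest_neighbors(sim))  # {('a1', 'b1')}
```

Note that a2 is left unmatched rather than forced onto its best candidate, which is how the mutual check accommodates non-matchable entities.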
- Inductive Knowledge Graph Reasoning for Multi-batch Emerging Entities. Cui, Yuanning; Wang, Yuxin; Sun, Zequn; Liu, Wenqiang; Jiang, Yiqiao; Han, Kexin; and Hu, Wei. In ACM International Conference on Information & Knowledge Management – CIKM 2022.
Over the years, reasoning over knowledge graphs (KGs), which aims to infer new conclusions from known facts, has mostly focused on static KGs. Due to the unceasing growth of knowledge in real life, it becomes necessary to enable inductive reasoning on expanding KGs. Existing inductive work assumes that new entities all emerge at once in a single batch, which oversimplifies the real scenario in which new entities appear continually. This study dives into a more realistic and challenging setting where new entities emerge in multiple batches. We propose a walk-based inductive reasoning model to tackle this new setting. Specifically, a graph convolutional network with adaptive relation aggregation is designed to encode and update entities using their neighboring relations. To capture the varying importance of neighbors, we employ a query-aware feedback attention mechanism during aggregation. Furthermore, to alleviate the sparse-link problem of new entities, we propose a link augmentation strategy that adds trustworthy facts into KGs. We construct three new datasets to simulate this multi-batch emergence scenario. The experimental results show that our proposed model outperforms state-of-the-art embedding-based, walk-based, and rule-based models on inductive KG reasoning.
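A query-aware aggregation of the kind described above can be sketched as softmax attention: each neighboring relation's message is weighted by its similarity to the query relation, so neighbors relevant to the current query contribute more. The scoring function and names below are assumptions for illustration, not the paper's exact feedback-attention design.

```python
import math

# Illustrative sketch of query-aware attention aggregation over the
# relation embeddings of an entity's neighbors.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def query_aware_aggregate(query_rel, neighbor_rels):
    """Softmax-attention pooling of neighbor relation embeddings,
    scored by dot-product similarity with the query relation."""
    scores = [dot(query_rel, r) for r in neighbor_rels]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(query_rel)
    return [sum(w * r[i] for w, r in zip(weights, neighbor_rels))
            for i in range(dim)]

query = [1.0, 0.0]
neighbors = [[1.0, 0.0], [0.0, 1.0]]
agg = query_aware_aggregate(query, neighbors)
# The first neighbor aligns with the query, so it dominates the aggregate.
```

Swapping the query relation would shift the attention weights toward the other neighbor, which is the point of making the aggregation query-dependent.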
- Revisiting Embedding-based Entity Alignment: A Robust and Adaptive Method. Sun, Zequn; Hu, Wei; Wang, Chengming; Wang, Yuxin; and Qu, Yuzhong. IEEE Transactions on Knowledge and Data Engineering – TKDE 2022.
Entity alignment – the discovery of identical entities across different knowledge graphs (KGs) – is a critical task in data fusion. In this paper, we revisit existing entity alignment methods in practical and challenging scenarios. Our empirical studies show that current work has a low level of robustness to long-tail entities and to the lack of entity names or relation triples. We aim to develop a robust and adaptive entity alignment method, one that does not require the availability of relations, attributes, or names. Our method consists of an attribute encoder and a relation encoder, representing an entity by aggregating its attributes or relational neighbors using attention mechanisms that highlight the useful attributes and relations in end-to-end learning. To let the encoders complement each other and produce a coherent representation space, we propose adaptive embedding fusion via a gating mechanism. We consider four evaluation settings, i.e., the conventional setting with both relation and attribute triples, as well as three challenging settings: without attributes, without relations, and without both relations and names. Results show that our method can achieve state-of-the-art performance. Even in the most challenging setting without relations and names, our method can still achieve promising results while existing methods fail.
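The gating mechanism for fusing the two encoders can be sketched as a per-dimension sigmoid gate that mixes the attribute-based and relation-based representations of an entity. The gate parameterization below is a simple stand-in: in the paper the gate is learned end to end, and all names and shapes here are illustrative assumptions.

```python
import math

# Hypothetical sketch of gated embedding fusion between an attribute
# encoder's output and a relation encoder's output for the same entity.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(attr_vec, rel_vec, gate_weights, gate_bias):
    """fused_i = g_i * attr_i + (1 - g_i) * rel_i,
    with g_i = sigmoid(w_i * (attr_i + rel_i) + b_i) standing in
    for a learned gating network."""
    fused = []
    for a, r, w, b in zip(attr_vec, rel_vec, gate_weights, gate_bias):
        g = sigmoid(w * (a + r) + b)
        fused.append(g * a + (1.0 - g) * r)
    return fused

attr = [1.0, 0.0]
rel = [0.0, 1.0]
out = gated_fusion(attr, rel, gate_weights=[0.0, 0.0], gate_bias=[0.0, 0.0])
print(out)  # with zero gate parameters, g = 0.5 everywhere -> [0.5, 0.5]
```

When one information source is missing (e.g. no relation triples), a learned gate can shift its weight toward the available encoder, which is what makes the fusion adaptive across the four evaluation settings.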