ACM Transactions on Intelligent Systems and Technology

Material type: Text
Series: ACM Transactions on Intelligent Systems and Technology, Volume 12, Issue 5, 2021
Publication details: New York : Association for Computing Machinery, 2021
Description: various pagings : illustrations ; 26 cm
ISSN:
  • 2157-6904
Subject(s):
Contents:
Multi-Graph Cooperative Learning Towards Distant Supervised Relation Extraction -- An Attentive Survey of Attention Models -- BATS: A Spectral Biclustering Approach to Single Document Topic Modeling and Segmentation -- Collaborative Local-Global Learning for Temporal Action Proposal -- Quantized Adam with Error Feedback -- dhCM: Dynamic and Hierarchical Event Categorization and Discovery for Social Media Stream -- S3-Net: A Fast Scene Understanding Network by Single-Shot Segmentation for Autonomous Driving -- Identifying Illicit Drug Dealers on Instagram with Large-scale Multimodal Data Fusion -- Fine-Grained Semantic Image Synthesis with Object-Attention Generative Adversarial Network -- Local Graph Edge Partitioning -- Significant DBSCAN+: Statistically Robust Density-based Clustering -- "In-Network Ensemble": Deep Ensemble Learning with Diversified Knowledge Distillation -- Detecting and Analyzing Collusive Entities on YouTube -- A Comprehensive Survey of Grammatical Error Correction -- KOMPOS: Connecting Causal Knots in Large Nonlinear Time Series with Non-Parametric Regression Splines.
Summary:
[Article Title: Multi-Graph Cooperative Learning Towards Distant Supervised Relation Extraction / Changsen Yuan, Heyan Huang, and Chong Feng, p. 52:1-52:21] Abstract: The Graph Convolutional Network (GCN) is a universal relation extraction method that can predict relations of entity pairs by capturing sentences' syntactic features. However, existing GCN methods often use dependency parsing to generate graph matrices and learn syntactic features. The quality of the dependency parse directly affects the accuracy of the graph matrix and, in turn, the performance of the whole GCN. Because of the influence of noisy words and sentence length in distantly supervised datasets, dependency parsing of such sentences produces errors and unreliable information, so it is difficult to obtain credible graph matrices and relational features for some sentences. In this article, we present a Multi-Graph Cooperative Learning model (MGCL), which focuses on extracting reliable syntactic features of relations from different graphs and harnessing them to improve the representations of sentences. We conduct experiments on a widely used real-world dataset, and the results show that our model achieves state-of-the-art performance on relation extraction.

[Article Title: An Attentive Survey of Attention Models / Sneha Chaudhari, Varun Mithal, Gungor Polatkan, and Rohan Ramanath, p. 53:1-53:32] Abstract: The attention model has become an important concept in neural networks and has been researched within diverse application domains. This survey provides a structured and comprehensive overview of developments in modeling attention. In particular, we propose a taxonomy that groups existing techniques into coherent categories. We review salient neural architectures in which attention has been incorporated and discuss applications in which modeling attention has shown a significant impact. We also describe how attention has been used to improve the interpretability of neural networks. Finally, we discuss some future research directions in attention. We hope this survey will provide a succinct introduction to attention models and guide practitioners in developing approaches for their applications.

[Article Title: BATS: A Spectral Biclustering Approach to Single Document Topic Modeling and Segmentation / Qiong Wu, Adam Hare, Sirui Wang, Yuwei Tu, Zhenming Liu, Christopher G. Brinton, and Yanhua Li, p. 54:1-54:29] Abstract: Existing topic modeling and text segmentation methodologies generally require large datasets for training, limiting their capabilities when only small collections of text are available. In this work, we reexamine the interrelated problems of "topic identification" and "text segmentation" for sparse document learning, when there is a single new text of interest. In developing a methodology to handle single documents, we face two major challenges. First is sparse information: with access to only one document, we cannot train traditional topic models or deep learning algorithms. Second is significant noise: a considerable portion of the words in any single document will produce only noise and not help discern topics or segments. To tackle these issues, we design an unsupervised, computationally efficient methodology called Biclustering Approach to Topic modeling and Segmentation (BATS). BATS leverages three key ideas to simultaneously identify topics and segment text: (i) a new mechanism that uses word order information to reduce sample complexity, (ii) a statistically sound graph-based biclustering technique that identifies latent structures of words and sentences, and (iii) a collection of effective heuristics that remove noise words and award important words to further improve performance. Experiments on six datasets show that our approach outperforms several state-of-the-art baselines on topic coherence, topic diversity, segmentation, and runtime metrics.

[Article Title: Collaborative Local-Global Learning for Temporal Action Proposal / Yisheng Zhu, Hu Han, Guangcan Liu, and Qingshan Li, p. 55:1-55:14] Abstract: Temporal action proposal generation is an essential and challenging task in video understanding, which aims to locate the temporal intervals that likely contain the actions of interest. Although great progress has been made, the problem is still far from well solved. In particular, prevalent methods handle well only the local (i.e., short-term) dependencies among adjacent frames but are generally powerless in dealing with the global (i.e., long-term) dependencies between distant frames. To tackle this issue, we propose CLGNet, a novel Collaborative Local-Global Learning Network for temporal action proposal. The main body of CLGNet is an integration of a Temporal Convolution Network and a Bidirectional Long Short-Term Memory, in which the Temporal Convolution Network is responsible for the local dependencies while the Bidirectional Long Short-Term Memory handles the global dependencies. Furthermore, an attention mechanism called the background suppression module is designed to guide the model to focus more on the actions. Extensive experiments on two benchmark datasets, THUMOS'14 and ActivityNet-1.3, show that the proposed method outperforms state-of-the-art methods, demonstrating its strong capability of modeling actions with varying temporal durations.

[Article Title: Quantized Adam with Error Feedback / Congliang Chen, Li Shen, Haozhi Huang, and Wei Liu, p. 56:1-56:14] Abstract: In this article, we present a distributed variant of an adaptive stochastic gradient method for training deep neural networks in the parameter-server model. To reduce the communication cost between the workers and the server, we incorporate two types of quantization schemes, gradient quantization and weight quantization, into the proposed distributed Adam. In addition, to reduce the bias introduced by the quantization operations, we propose an error-feedback technique to compensate for the quantized gradient. Theoretically, in the stochastic nonconvex setting, we show that the distributed adaptive gradient method with gradient quantization and error feedback converges to a first-order stationary point, and that the variant with weight quantization and error feedback converges to a point related to the quantization level, under both single-worker and multi-worker modes. Finally, we apply the proposed distributed adaptive gradient methods to train deep neural networks; experimental results demonstrate their efficacy.
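The MGCL abstract above builds on graph convolutions over dependency-parse matrices. As a point of reference only, here is a minimal single-graph GCN layer in Python (NumPy); the adjacency matrix, embedding sizes, and weights are invented for illustration and are not taken from the paper:

    import numpy as np

    def gcn_layer(A, H, W):
        """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
        A_hat = A + np.eye(A.shape[0])            # add self-loops
        d = A_hat.sum(axis=1)                     # node degrees
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
        return np.maximum(A_norm @ H @ W, 0.0)    # ReLU

    # Toy 4-token "sentence": adjacency from a hypothetical dependency parse.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    H = np.random.randn(4, 8)        # token embeddings (placeholder)
    W = np.random.randn(8, 8)        # layer weights (placeholder)
    print(gcn_layer(A, H, W).shape)  # (4, 8)

A noisy or wrong parse changes A directly, which is exactly the fragility the MGCL abstract targets by combining several graphs.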
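For readers new to the attention models surveyed by Chaudhari et al., a sketch of scaled dot-product attention, the mechanism underlying most of the surveyed architectures, may help; the shapes below are arbitrary and chosen only for the example:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V                              # weighted sum of values

    Q = np.random.randn(3, 16)  # 3 queries
    K = np.random.randn(5, 16)  # 5 keys
    V = np.random.randn(5, 16)  # 5 values
    print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)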
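The biclustering step described in the BATS abstract can be illustrated with a generic spectral co-clustering sketch on a word-by-sentence count matrix. This is not the paper's algorithm (BATS adds word-order information and noise-word heuristics), only the textbook SVD-based idea it builds on:

    import numpy as np

    def spectral_bicluster(M):
        """Split rows (words) and columns (sentences) into two co-clusters
        using the signs of the second left/right singular vectors."""
        d1 = np.sqrt(M.sum(axis=1)) + 1e-12   # row scaling
        d2 = np.sqrt(M.sum(axis=0)) + 1e-12   # column scaling
        An = M / d1[:, None] / d2[None, :]    # normalized matrix
        U, s, Vt = np.linalg.svd(An)
        row_labels = (U[:, 1] > 0).astype(int)  # word clusters
        col_labels = (Vt[1] > 0).astype(int)    # sentence clusters
        return row_labels, col_labels

    M = np.random.rand(20, 6)  # toy word-by-sentence matrix (placeholder)
    rows, cols = spectral_bicluster(M)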
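The local-global split in CLGNet (temporal convolution for short-term context, a BiLSTM for long-term context) can be sketched as follows, assuming PyTorch is available; the layer sizes are placeholders and the background suppression module is omitted:

    import torch
    import torch.nn as nn

    class LocalGlobalBlock(nn.Module):
        """Temporal convolution (local context) followed by a BiLSTM
        (global context) over a sequence of per-frame features."""
        def __init__(self, dim):
            super().__init__()
            self.tcn = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(dim, dim // 2, batch_first=True,
                                bidirectional=True)

        def forward(self, x):  # x: (batch, time, dim)
            local = torch.relu(self.tcn(x.transpose(1, 2))).transpose(1, 2)
            global_, _ = self.lstm(local)
            return global_

    feats = torch.randn(2, 100, 64)    # 2 videos, 100 frames, 64-dim features
    out = LocalGlobalBlock(64)(feats)  # (2, 100, 64)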
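Finally, the error-feedback idea in Quantized Adam admits a compact sketch: keep the residual that quantization discards and fold it back into the next gradient before compressing again. The uniform quantizer below is a deliberately crude stand-in, not the scheme analyzed in the paper:

    import numpy as np

    def quantize(x, levels=16):
        """Crude uniform quantizer: round to a small grid of values."""
        scale = np.abs(x).max() + 1e-12
        return np.round(x / scale * levels) / levels * scale

    class ErrorFeedbackCompressor:
        """Remember what quantization dropped and compensate next step."""
        def __init__(self, shape):
            self.error = np.zeros(shape)

        def compress(self, grad):
            corrected = grad + self.error  # fold in previous rounding loss
            q = quantize(corrected)        # low-precision message to server
            self.error = corrected - q     # residual carried forward
            return q

    comp = ErrorFeedbackCompressor((1000,))
    for step in range(3):
        q = comp.compress(np.random.randn(1000))

Without the error memory, the quantization bias accumulates across steps; with it, the dropped precision is eventually transmitted, which is what the paper's convergence analysis exploits.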
Item type: Serials
Holdings:
Item type: Serials
Current library: National University - Manila
Home library: LRC - Main
Shelving location: Periodicals
Collection: Gen. Ed. - CCIT
Call number: ACM Transactions on Intelligent Systems and Technology, Volume 12, Issue 5, 2021
Copy number: c.1
Status: Available
Barcode: PER000000465
Browsing LRC - Main shelves, Shelving location: Periodicals, Collection: Gen. Ed. - CCIT:
  • ACM Transactions on Intelligent Systems and Technology, Volume 12, Issue 1, 2021
  • ACM Transactions on Intelligent Systems and Technology, Volume 12, Issue 3, June 2021 / Yu Zheng, editor-in-chief
  • ACM Transactions on Intelligent Systems and Technology, Volume 12, Issue 4, 2021
  • ACM Transactions on Intelligent Systems and Technology, Volume 12, Issue 5, 2021
  • ACM Transactions on Intelligent Systems and Technology, Volume 12, Issue 6, 2021
  • ACM Transactions on Intelligent Systems and Technology, Volume 13, Issue 1, 2022
  • ACM Transactions on Modeling and Computer Simulation, Volume 31, Issue 1, Dec 2021

Includes bibliographical references.
