KEYWORDS: Computer programming, Parallel computing, Operating systems, Computing systems, Instrument modeling, Field programmable gate arrays, Design and modelling, Computer hardware, Data processing, Digital signal processing
Heterogeneous Computing (HC) is a computing paradigm that can provide high performance and low power consumption for large-scale problems and has been successfully applied in scenarios such as 5G communication, video processing, data mining, and edge computing. To facilitate parallel programming across multiple computing cores, the Open Computing Language (OpenCL) was created as a programming standard for heterogeneous computing systems. Unfortunately, the ReWorks operating system provides no such heterogeneous programming environment for developers, who therefore cannot obtain the high performance offered by heterogeneous systems. This paper presents a novel OpenCL-based heterogeneous computing environment, available as a third-party library for developers using the ReWorks real-time operating system. We conducted systematic experiments to evaluate the performance of this heterogeneous computing library; the results show that programs built on the heterogeneous environment outperform equivalent native ReWorks applications.
In multi-round dialogue systems, the final reply is closely related to two factors: the context of the dialogue and the persona characteristics. However, not all persona and context information affects the final reply; it may depend only on a few crucial persona traits and context utterances, and indiscriminately using all available information can even degrade the generated dialogue. It is therefore necessary to extract and exploit the key persona and context information to improve the quality of the generated response. In this paper, we show how to solve this problem with a new model and methods. Specifically, the model consists of two parts, an encoder and a decoder: the encoder encodes personas, contexts, and historical responses, and the decoder generates the corresponding words from the vocabulary. The weights of the persona and context representations are then updated through a multi-head self-attention mechanism to influence the response generated by the decoder. Experimental results show that, compared with the baseline models, our model and methods improve on metric-based evaluation.
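The multi-head self-attention weighting described in this abstract can be sketched as follows. This is a minimal numpy sketch of standard multi-head self-attention over stacked persona and context encodings; the dimensions, head count, and random projection matrices are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """X: (seq_len, d_model) stacked persona + context encodings.
    Returns re-weighted representations of the same shape."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    # Split each projection into heads: (n_heads, seq_len, d_head).
    def split(M):
        return M.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)   # per-head weights over persona/context tokens
    out = attn @ Vh                   # (n_heads, seq_len, d_head)
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

rng = np.random.default_rng(0)
d_model, n_heads, seq_len = 8, 2, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Y = multi_head_self_attention(X, Wq, Wk, Wv, n_heads)
```

The attention weights concentrate on the tokens most relevant to each position, which is how the model can emphasize only the crucial persona traits and context utterances when generating the reply.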
Chinese Spell Check (CSC) aims to detect and correct spelling errors in Chinese text, almost all of which stem from phonetic or visual similarity. Large-scale pre-trained language models (PLMs) are currently making substantial progress on the CSC task. However, when correcting errors, PLMs tend to select words that are semantically sound or expressively fluent while sometimes ignoring pronunciation similarity, as the models lack knowledge of pronunciation differences. To address this problem, we propose a multi-task learning model to enhance the CSC task. The auxiliary task estimates, at the granularity of each word, the degree of the pronunciation gap between the original input and the corresponding correct text. Specifically, we use the edit distance between Pinyin strings to measure the degree of pronunciation discrepancy; the edit-distance scheme is modified to account for the specific structure of Pinyin syllables. Experiments on an openly available benchmark dataset demonstrate the effectiveness of our strategy.
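A Pinyin edit distance of the kind this abstract describes can be sketched as follows. The Levenshtein part is standard; the syllable split into initial and final is one plausible structure-aware modification, shown only as an illustration, since the abstract does not specify the paper's exact scheme.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance between two sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

# Pinyin initials, longest first so "zh" matches before "z".
_INITIALS = ("zh", "ch", "sh", "b", "p", "m", "f", "d", "t", "n", "l",
             "g", "k", "h", "j", "q", "x", "r", "z", "c", "s", "y", "w")

def split_syllable(pinyin):
    """Split one Pinyin syllable into (initial, final); initial may be empty."""
    for ini in _INITIALS:
        if pinyin.startswith(ini):
            return ini, pinyin[len(ini):]
    return "", pinyin

def pinyin_distance(p1, p2):
    """Distance over (initial, final) units rather than raw characters, so
    e.g. 'zhang' vs 'zang' counts as one initial substitution, not two edits."""
    return edit_distance(split_syllable(p1), split_syllable(p2))
```

Treating the initial and final as atomic units keeps the distance aligned with how Mandarin speakers actually confuse sounds (e.g. retroflex vs. plain initials), rather than with raw character overlap.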
Intent recognition is the first stage that must be performed accurately in a conversation between an AI system and a user. Because of the diversity of user intents, it is difficult to manually define all intent categories in advance during the training phase. An inability to distinguish undefined intent categories can cause the AI to keep replying to the user with seriously wrong answers, which greatly degrades the user experience. It is therefore necessary both to recognize seen intent categories and to distinguish unseen ones. In this paper, a model with an automatic probability threshold, built on the BERT model, is used to ensure accurate recognition of seen intents while distinguishing unseen intents. Because training samples are scarce, the automatic-probability-threshold model uses text vectors from two BERT models with different dropout parameters as the training set, which improves the accuracy of the model.
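The probability-threshold idea can be sketched as follows. This is a minimal illustration of open intent detection, where an utterance is marked "unseen" if the classifier's confidence falls below a per-class threshold; deriving the threshold as mean confidence minus one standard deviation on validation data is an assumption for illustration, not the paper's actual automatic scheme.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fit_thresholds(val_logits, val_labels, n_classes):
    """Derive a per-class probability threshold from validation data:
    mean confidence minus one standard deviation for each seen class."""
    probs = softmax(val_logits)
    thresholds = np.zeros(n_classes)
    for c in range(n_classes):
        conf = probs[val_labels == c, c]
        thresholds[c] = conf.mean() - conf.std()
    return thresholds

def predict(logits, thresholds, unseen_label=-1):
    """Return the seen class if its confidence clears the class threshold,
    otherwise mark the utterance as an unseen intent."""
    probs = softmax(logits)
    pred = probs.argmax(axis=-1)
    conf = probs[np.arange(len(pred)), pred]
    return np.where(conf >= thresholds[pred], pred, unseen_label)

# Toy example with two seen classes.
val_logits = np.array([[5.0, 0.0], [4.0, 0.0], [0.0, 5.0], [0.0, 4.0]])
val_labels = np.array([0, 0, 1, 1])
thresholds = fit_thresholds(val_logits, val_labels, n_classes=2)
preds = predict(np.array([[6.0, 0.0], [0.1, 0.0]]), thresholds)
```

A confident input is assigned its seen class, while a low-confidence input falls below the threshold and is routed to the unseen-intent label instead of being forced into a wrong category.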
With the development of natural language processing technologies such as deep learning and large-scale pre-trained language models, intelligent question-answering robots have become increasingly popular in industry. Text classification and text matching algorithms are important for automatic robot response. However, for text classification the number of categories is always fixed, and for text matching the sentence pairs needed for training are difficult to collect. In this paper, we propose a novel text matching method that solves both problems and can be trained with only a text classification dataset. The proposed model has better recognition ability for newly added question categories without retraining. On the basis of a pre-trained model, we first conduct further pre-training with contrastive learning, and then conduct multi-task fine-tuning (core sentence-vector matching and contrastive learning). The resulting model benefits from both the text classification method and the text matching method.
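The contrastive-learning objective mentioned in this abstract is typically an InfoNCE-style loss over sentence embeddings with in-batch negatives. The numpy sketch below illustrates that objective in general; the temperature and the use of in-batch negatives are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.05):
    """InfoNCE contrastive loss: each anchor should be most similar to its
    own positive among all positives in the batch (in-batch negatives)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = a @ p.T / temperature               # (batch, batch) scaled cosine sims
    sims -= sims.max(axis=1, keepdims=True)    # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))        # cross-entropy on the diagonal

# Sanity check: correctly paired embeddings yield a lower loss than mismatched ones.
rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 8))
loss_aligned = info_nce_loss(emb, emb)
loss_mismatched = info_nce_loss(emb, emb[::-1])
```

Training with this loss pulls matching sentence pairs together and pushes the rest of the batch apart, which is what lets the fine-tuned encoder score a new question against newly added categories without retraining a fixed classification head.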
As the demand for executing massively parallel computations on embedded devices has grown, various heterogeneous systems have been built on SoCs. To better use and manage these heterogeneous systems, and to facilitate the development of heterogeneous computing programs, operating systems typically provide a heterogeneous computing framework. Unfortunately, the ReWorks real-time operating system does not provide such a framework for writing heterogeneous computing programs. We chose OpenCL as the heterogeneous computing framework for the ReWorks operating system and implemented the kernel-mode portion of the OpenCL framework, also known as the OpenCL driver.