https://riverpublishersjournal.com/index.php/JMLTAP/issue/feed
Journal of Machine Learning Theory, Applications and Practice
2023-04-06T14:57:56+00:00
Editorial Office Manager
jmltap@riverpublishers.com
Open Journal Systems
<p>The journal is devoted to providing an open access forum for machine learning research, its applications and practice. It interprets machine learning in its broadest sense, covering research in areas such as artificial intelligence, computational intelligence, computer vision, deep learning, multimedia indexing, speech and natural language processing, and their applications in all areas of society. The journal places a special emphasis on machine learning practice by soliciting work describing research and case studies related to deploying machine learning in different sectors of society, especially in the health and medicine arena. The journal is published quarterly, with every issue featuring one case study or tutorial on machine learning practice.</p>
https://riverpublishersjournal.com/index.php/JMLTAP/article/view/9
A Comprehensive Review on Non-Neural Networks Collaborative Filtering Recommendation Systems
2023-02-19T12:31:56+00:00
Carmel Wenga
carmel.wenga@nzhinusoft.com
Majirus Fansi
majirus.fansi@nzhinusoft.com
Sébastien Chabrier
sebastien.chabrier@upf.pf
Jean-Martial Mari
jean-martial.mari@upf.pf
Alban Gabillon
alban.gabillon@upf.pf
<p>Over the past two decades, recommendation systems have attracted a lot of interest due to the massive rise of online applications. Particular attention has been paid to collaborative filtering, the most widely used approach in applications that involve information recommendation. Collaborative Filtering (CF) uses the known preferences of a group of users to make predictions and recommendations about the unknown preferences of other users (recommendations are made based on the past behavior of users). Since CF was first introduced in the 1990s, a wide variety of increasingly successful models have been proposed. Owing to the success of machine learning techniques in many areas, there has been a growing emphasis on applying such algorithms in recommendation systems. In this article, we present an overview of the CF approaches for recommendation systems, their two main categories, and their evaluation metrics. We focus on the application of classical Machine Learning algorithms to CF recommendation systems by presenting their evolution from their first use cases to advanced Machine Learning models. We attempt to provide a comprehensive and comparative overview of CF systems (with Python implementations) that can serve as a guideline for research and practice in this area.</p>
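To make the CF mechanism described in the abstract concrete, here is a minimal sketch of memory-based (user-based) collaborative filtering in Python. The toy rating matrix, function names, and similarity choice (cosine similarity over co-rated items) are illustrative assumptions, not the article's reference implementation.

```python
# Minimal user-based collaborative filtering sketch:
# predict a user's unknown rating as the similarity-weighted
# average of other users' ratings for the same item.
import numpy as np

# Toy user-item rating matrix (0 means "not rated"); values are assumptions.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(u, v):
    """Cosine similarity restricted to items both users rated."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def predict(R, user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        s = cosine_sim(R[user], R[other])
        num += s * R[other, item]
        den += abs(s)
    return num / den if den else 0.0

print(predict(R, user=0, item=2))  # estimated rating of user 0 for item 2
```

Item-based CF follows the same scheme with the roles of rows and columns swapped, i.e., similarities are computed between item columns rather than user rows.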
2023-02-08T00:00:00+00:00
Copyright (c) 2023
https://riverpublishersjournal.com/index.php/JMLTAP/article/view/8
NL2CMD: An Updated Workflow for Natural Language to Bash Commands Translation
2023-02-19T12:22:56+00:00
Quchen Fu
quchen.fu@vanderbilt.edu
Zhongwei Teng
zhongwei.teng@vanderbilt.edu
Marco Georgaklis
marco.georgaklis@vanderbilt.edu
Jules White
jules.white@vanderbilt.edu
Douglas C. Schmidt
d.schmidt@vanderbilt.edu
<p class="noindent">Translating natural language into Bash Commands is an emerging research field that has gained attention in recent years. Most efforts have focused on producing more accurate translation models. To the best of our knowledge, only two datasets are available, with one based on the other. Both datasets involve scraping through known data sources (through platforms like stack overflow, crowdsourcing, etc.) and hiring experts to validate and correct either the English text or Bash Commands.</p> <p class="indent">This paper provides two contributions to research on synthesizing Bash Commands from scratch. First, we describe a state-of-the-art translation model used to generate Bash Commands from the corresponding English text. Second, we introduce a new NL2CMD dataset that is automatically generated, involves minimal human intervention, and is over six times larger than prior datasets. Since the generation pipeline does not rely on existing Bash Commands, the distribution and types of commands can be custom adjusted. Our empirical results show how the scale and diversity of our dataset can offer unique opportunities for semantic parsing researchers.</p>
2023-02-08T00:00:00+00:00
Copyright (c) 2023
https://riverpublishersjournal.com/index.php/JMLTAP/article/view/268
Deep Learning Models on CPUs: A Methodology for Efficient Training
2023-04-06T14:57:56+00:00
Quchen Fu
quchen.fu@vanderbilt.edu
Ramesh Chukka
ramesh.n.chukka@intel.com
Keith Achorn
keith.achorn@intel.com
Thomas Atta-fosu
thomas.atta-fosu@intel.com
Deepak R. Canchi
deepak.r.canchi@intel.com
Zhongwei Teng
zhongwei.teng@vanderbilt.edu
Jules White
jules.white@vanderbilt.edu
Douglas C. Schmidt
d.schmidt@vanderbilt.edu
<p>GPUs have been favored for training deep learning models due to their highly parallelized architecture. As a result, most studies on training optimization focus on GPUs. There is often a trade-off, however, between cost and efficiency when choosing the proper hardware for training. In particular, CPU servers would be beneficial if training on CPUs were more efficient, as they incur lower hardware-upgrade costs and make better use of existing infrastructure.</p> <p>This paper makes three contributions to research on training deep learning models using CPUs. First, it presents a method for optimizing the training of deep learning models on Intel CPUs, along with a toolkit called ProfileDNN, which we developed to improve performance profiling. Second, we describe a generic training optimization method that guides our workflow, and we explore several case studies where we identified performance issues and then optimized the Intel® Extension for PyTorch, resulting in an overall 2x training performance increase for the RetinaNet-ResNext50 model. Third, we show how to leverage the visualization capabilities of ProfileDNN, which enabled us to pinpoint bottlenecks and create a custom focal loss kernel that was two times faster than the official reference PyTorch implementation.</p>
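For context on the focal loss kernel mentioned above, here is a minimal reference-style focal loss (as defined for RetinaNet: FL(p_t) = -α_t (1 - p_t)^γ log(p_t)) in plain PyTorch. It is a sketch of the computation such a custom kernel would accelerate, not the authors' optimized implementation; the tensor shapes and hyperparameter defaults are assumptions.

```python
# Reference-style binary (sigmoid) focal loss in plain PyTorch.
# A sketch of the computation a custom kernel would accelerate,
# not the authors' optimized implementation.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t)."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)              # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class balancing term
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 4)                      # toy predictions (assumed shape)
targets = torch.randint(0, 2, (8, 4)).float()   # toy binary labels
print(focal_loss(logits, targets))
```

The (1 - p_t)^γ factor down-weights easy, well-classified examples, which is why focal loss is central to dense detectors such as RetinaNet, and why its per-element arithmetic is a natural target for kernel fusion.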
2023-04-06T00:00:00+00:00
Copyright (c) 2023