Recent Trends in Parallel Computing


SJIF 2024: Under evaluation
Area: Science
Evaluated version: online
Previous SJIF evaluations:
2023: Not indexed
2022: 6.04
2021: 6.112
2020: 6.124
The journal is indexed in: SJIFactor.com
Basic information
Main title: Recent Trends in Parallel Computing
ISSN: 2393-8749 (E)
URL: http://journals.stmjournals.com/rtpc
Country: India
Frequency: 3 times a year
License: Free for non-commercial use
Texts availability: Paid
Contact Details
Editor-in-chief: Prof. Pinaki Mitra
Department of Computer Science and Engineering, Indian Institute of Technology, Guwahati, Assam, India
Publisher: STM Journals, An Imprint of Consortium E-Learning Network Pvt. Ltd.
A-118, 1st Floor, Sector-63, Noida, U.P., India, Pin – 201301
Journal's description  
Recent Trends in Parallel Computing (RTPC) is a print and e-journal focused on the rapid publication of fundamental research papers in all areas of parallel computing.

Parallel computing is a form of computation in which many calculations are carried out simultaneously. It works on the principle that large problems can often be divided into smaller ones, which are then solved in parallel. Specialized parallel computer architectures are sometimes used alongside traditional processors to speed up specific tasks, which markedly reduces the execution time of those tasks.
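The divide-and-combine principle described above can be sketched in a few lines of Python. This is an illustrative example, not code from the journal; the chunk size and the use of a thread pool are arbitrary choices, and for CPU-bound work a process pool would typically be used instead to obtain true parallelism.

```python
# Illustrative sketch: divide a large problem (summing a sequence)
# into smaller subproblems, solve them concurrently, then combine
# the partial results.
from concurrent.futures import ThreadPoolExecutor

def chunked(data, size):
    """Split `data` into consecutive chunks of at most `size` items."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def parallel_sum(data, chunk_size=25):
    """Sum `data` by solving each chunk in its own worker, then combining."""
    chunks = chunked(data, chunk_size)
    with ThreadPoolExecutor() as pool:
        partial_sums = list(pool.map(sum, chunks))  # subproblems solved concurrently
    return sum(partial_sums)  # combine step

print(parallel_sum(range(1, 101)))  # 5050, same as sum(range(1, 101))
```

The same split/solve/combine structure underlies most of the parallel algorithms within the journal's scope; only the partitioning strategy and the combining operator change.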

Focus and Scope

Tree, Diamond Network, Mesh, Linear Array, Star, Hypercube, Chordal ring, Cube-connected-cycles: Cayley graphs, communication networks, diameter, (d,k) graphs, parallel processing architectures, VLSI layouts, hypercubic networks, N-node hypercube, cube-connected cycles, multilayer grid model, butterfly networks, generalized hypercubes, hierarchical swapped networks, indirect swapped networks, folded hypercubes, reduced hypercubes, recursive hierarchical swapped networks, enhanced-cubes, All-to-all broadcast, Cube-connected cycle, Hypercube, Multihop routing, Wavelength division multiplexing.
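As a small illustration of one topology named above (again an illustrative sketch, not code from any RTPC paper): in an n-dimensional hypercube, nodes are labeled with n-bit integers, two nodes are adjacent exactly when their labels differ in one bit, and the network diameter is therefore n.

```python
# Hypercube interconnection network: node labels are n-bit integers;
# flipping any single bit of a label yields a neighbor.
def hypercube_neighbors(node, n):
    """Neighbors of `node` in an n-dimensional hypercube (one bit flipped each)."""
    return [node ^ (1 << i) for i in range(n)]

def hypercube_distance(u, v):
    """Shortest-path length = Hamming distance between the labels."""
    return bin(u ^ v).count("1")

# In a 3-cube, node 0 is adjacent to 1, 2, and 4, and the farthest
# node from 0 is 7 (binary 111), at distance 3 = the diameter.
print(sorted(hypercube_neighbors(0, 3)))  # [1, 2, 4]
print(hypercube_distance(0, 7))           # 3
```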
ILLIAC IV, Torus, PM2I, Butterfly, Mesh-of-tree: Network-on-Chip, Mesh-of-Tree topology, Spare core, Communication cost, Integer Linear Programming, Particle Swarm Optimization, Interconnection network, MoT, DOR, NoC, Deterministic routing, Routing, Throughput, Topology, Routing protocols, IP networks, Computer architecture, System-on-a-chip, Network-on-a-chip, Scalability, Parallel processing, Network topology, Computer architecture, System-on-a-chip, System recovery, Network servers, Region 10, Computational modeling.
Pyramid, Generalized Hypercube: Feature extraction, Three-dimensional displays, Hypercubes, Training, Image color analysis, Robots, Object recognition, convolutional hypercube pyramid, RGB-D object category, instance recognition, deep learning, computer vision, RGB-D images, training data deficiency, multimodality input dissimilarity, RGB-D object recognition, point cloud data, convolutional neural network, CNN, coarse-to-fine feature representation, fusion scheme, classification, extreme learning machines, ELM, nonlinear classifiers, Interconnection networks, graph embeddings, incomplete hypercubes, shuffle-trees, pyramids, the mesh of trees.
Twisted Cube, Folded Hypercube: Hypercubes, Fault tolerance, Robustness, Multiprocessor interconnection networks, Computer science, Performance analysis, Routing, Computer architecture, Fault-tolerant systems, Multiprocessing systems, operationally enhanced folded hypercubes, performance, reliability, operation mode, fault-tolerance, twisted hypercube.
Cross-connected Cube: Cross-connected bidirectional pyramid network (CBP-Net), infrared small-dim target detection, regular constraint loss (RCL), region of interest (ROI) feature augment, Feature extraction, Object detection, Proposals, Convolution, Loss measurement, Information filters, Signal to noise ratio, Product Space, Finite Automaton, Absolute Sense, Focal Condition, Spontaneous Generation.
Parallel Architectures: Shared Memory, Scalable Multi-Processors, Interconnection networks: Parallel Machine, Context Switch, Runtime System, Memory Controller, Cache Coherence, Cache Size, Memory Block, Cache Coherence, Directory Scheme, Directory Entry, Shared Memory, Interconnection Network, Total Execution Time, Page Fault, Parallel Speedup, Shared Memory, Memory Block, Error Recovery, Memory Element, Faulty Node.
Task and Data parallelism, Programming for performance: Message Passing Interface, Runtime System, Data Parallelism, Task Parallelism, High-Performance Fortran, Schedule Policy, Task Schedule, Execution Model, Runtime System, Load Imbalance, Parallel Composition, Execution Model, Communicate Sequential Process, Data Parallelism, Parallel Variable, Parallel Programming, Local View, Data Parallelism, Partial Application, Algorithmic Skeleton, Flow Solver, Task Expression, Data Parallelism, Task Program, Resource Request, Service Time, Cost Model, Input Stream, Stage Pipeline, Relative Speedup.
Multi-Core programming: Properties of Multi-Core architectures, Pthreads, OpenMP: Main Memory, Execution Model, Runtime System, Heterogeneous Architecture, Embed Memory, Multiprocessor System-on-chip (MPSoC), Platform Description, Control Data Flow Graph (CDFG), Self-timed Scheduling, Dataflow, Newton’s method, OpenMP parallel computing technology, Multithreading, Finite difference, Multi-core processors, Parallelization, Parallel computation, Parallel algorithm, Performance analysis, Reduction Operation, Likelihood Score, Programming Paradigm, Shared Memory Architecture, OpenMP Version, General Purpose Multi-cores, Heterogeneous Multi-cores, Graphical Processing Units, High-Performance Computing, Fine-grain Parallelism, Computer Architectures, Acceleration, Phylogeny, Parallel processing, Multicore processing, Computer applications, Concurrent computing, Bioinformatics, Computer architecture, Scalability, Graphics.
GPU, Accelerators: Dense Linear Algebra Solvers, GPU Accelerators, Multicore, MAGMA, Hybrid Algorithms, dense linear algebra solvers, multicore systems, GPU accelerators, graphics processing unit, hybridization techniques, Cholesky factorization, LU factorization, QR factorization, parallel programming model, optimized BLAS software, LAPACK software, architecture-specific optimization, algorithm-specific optimization, MAGMA library, Linear algebra, Multicore processing, Acceleration, Iterative algorithms, Linear accelerators, Linear systems, Equations, Computer architecture, Scientific computing, Numerical simulation, Tiles, Kernel, Graphics processing unit, Multicore processing, Libraries, Runtime, Algorithm design and analysis, Cholesky Factorization, Single Precision, Double Precision Arithmetic, Multiple GPUs, Data Tile, GPU, Power consumption, Multi-Processor Allocation, Bandwidth Utilization, optimal multiprocessor allocation algorithm, high performance GPU accelerators, power consumption, heat dissipation, high performance computing systems, cooling infrastructure, bandwidth utilization, BU, MultiProcessor requirements, MPlloc algorithm, performance degradation.
Multi-core architectures: Computer architecture, Throughput, Yarn, Parallel processing, Performance gain, Multithreading, Delay, Computer science, Milling machines, Microprocessors, single-ISA heterogeneous multicore architectures, multithreaded workload performance, chip multiprocessor, job matching, single-thread performance, thread parallelism, dynamic core assignment policies, static assignment, heterogeneous architectures, multithreading cores, comparable-area homogeneous architecture, naive policy, Bandwidth, Computer architecture, Space technology, Delay, Power system interconnection, Joining processes, Space exploration, Computer science, Design engineering, Power engineering and energy, interconnections, multicore architectures, on-chip interconnects, chip multiprocessor, interconnect architectures, multicore design, interconnect bandwidth, hierarchical bus structure, Thermal management, Multicore processing, Computer architecture, Temperature control, Parallel processing, Microprocessors, Dynamic voltage scaling, Frequency, Degradation, Multi-Core Architectures, Dynamic Thermal Management, Activity Migration, Dynamic Voltage, Frequency Scaling.
Parallel programs on multi-core machines: Parallelism and concurrency, distributed programming, heterogeneous (hybrid) systems, distributed memory systems, parallel programming, shared memory systems, Computational modeling, Parallel programming, Graphics processing unit, Message systems, Instruction sets, Multicore processing, distributed computing, distributed memory systems, parallel programming, programming environments, message passing, multiprocessing, shared memory systems, Skeleton, Libraries, Instruction sets, Computer architecture, Benchmark testing, Distributed databases, Runtime, PDES, multi-threaded, optimistic simulation, multi-core systems, optimization, Message systems, Computational modeling, Optimization, Receivers, Multicore processing, Synchronization.







 
 
Copyright © 2024 - SJIFactor.com except certain content provided by third parties.