|Rosella Machine Intelligence & Data Mining|
Deep Learning for Java Applications
Developing deep learning / machine learning Java applications? Check out CMSR Machine Learning Studio. It is written in Java!
Java Runtime Environment (JRE) / Java Development Kit (JDK)
The Java Runtime Environment (JRE), also known as the Java Virtual Machine (JVM) or Java Plug-in, and the Java Development Kit (JDK) can be obtained from the following web sites:
JDBC drivers can be found on the following web sites:
Installing JDBC Drivers
To install a JDBC driver in CMSR Machine Learning Studio, select "File" - "Install JDBC Driver" from the window menus.
For more on JDBC drivers, please read JDBC Drivers.
Java and High Performance Computing
Java can be sluggish. Two factors in particular make Java code slow. The first is that method calls are relatively expensive, so reducing the number of calls can make a big difference. Inlining methods can improve performance drastically; frequently called methods especially should be implemented inline for better performance. The second is multi-dimensional arrays: accessing values in multi-dimensional arrays is costly. Flattening them into single-dimensional arrays can improve performance significantly, and it also reduces heap memory usage.
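A minimal sketch of the two optimizations above: a matrix stored as a flat, row-major 1D array (one contiguous allocation instead of an array of row arrays) with the index arithmetic written inline in the loop rather than behind a per-element accessor method. The class and method names here are illustrative only, not from any Rosella API.

```java
public class FlatMatrix {
    final int rows, cols;
    final double[] data;  // row-major: element (r, c) lives at index r * cols + c

    FlatMatrix(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        this.data = new double[rows * cols];  // single allocation, no per-row objects
    }

    // Sum of all elements: the index is computed inline inside the loop,
    // avoiding a method call (e.g. get(r, c)) for every element.
    double sum() {
        double total = 0.0;
        for (int r = 0; r < rows; r++) {
            int base = r * cols;              // row offset hoisted out of inner loop
            for (int c = 0; c < cols; c++) {
                total += data[base + c];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        FlatMatrix m = new FlatMatrix(2, 3);
        m.data[1 * 3 + 2] = 5.0;              // set element (1, 2) via flat index
        System.out.println(m.sum());
    }
}
```

Compared with a `double[rows][cols]`, the flat layout removes one level of pointer indirection per access and keeps the whole matrix in one contiguous block, which is friendlier to the CPU cache.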
GPU: Java and OpenCL (JOCL)
Java OpenCL (JOCL) can be used to speed up computations. It is useful for data-intensive parallel operations; deep neural network training is a good example. A GPU can run such computations hundreds or even thousands of times faster than a single CPU thread.
GPU: Java and CUDA (JCUDA)
Java CUDA (JCUDA) is very similar to JOCL. CUDA is Nvidia's proprietary package, and JCUDA provides Java bindings for it.
In general, CUDA can be more efficient than OpenCL. OpenCL requires many method calls to set kernel parameters, and since method calls are heavy operations, this is where OpenCL is less efficient than CUDA. For jobs dominated by many asynchronously enqueued operations, however, the difference may not be significant; for deep neural network training, for example, both take similar time.