
COMPRESSED KERNEL PERCEPTRONS

Slobodan Vucetic*, Vladimir Coric, Zhuang Wang
Department of Computer and Information Sciences, Temple University, Philadelphia, PA 19122, USA

[…] {(x_t, y_t), t = 1…T}, where x_t is a […]-dimensional input vector, called an instance, and y_t is its label. […] the resulting […]

[…] kernel perceptron

    f(x) = Σ_i α_i K(x, x_i),    (1)

where […]. In Table 3 we provide information about accuracy, number of support vectors, and their bit precision for Compressed Kernel Perceptrons trained on benchmark data […] the last column of Table 3. It is evident that the accuracies of Compressed Kernel Perceptrons are competitive with those of the memory-unbounded counterpart.
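The prediction rule in equation (1) can be sketched as a standard online kernel perceptron. The Gaussian kernel, the mistake-driven update, and all names below are illustrative assumptions, not the paper's exact implementation:

```python
import math

def gaussian_kernel(a, b, gamma=1.0):
    # K(x, x_i) = exp(-gamma * ||x - x_i||^2); the kernel choice is an assumption
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

class KernelPerceptron:
    def __init__(self, kernel=gaussian_kernel):
        self.kernel = kernel
        self.support_vectors = []   # stored instances x_i
        self.alphas = []            # coefficients alpha_i in equation (1)

    def f(self, x):
        # f(x) = sum_i alpha_i * K(x, x_i), as in equation (1)
        return sum(a * self.kernel(x, sv)
                   for a, sv in zip(self.alphas, self.support_vectors))

    def update(self, x, y):
        # classic perceptron rule: add a support vector only on a mistake
        if y * self.f(x) <= 0:
            self.support_vectors.append(x)
            self.alphas.append(float(y))

# toy usage on a 1-D threshold problem
p = KernelPerceptron()
for x, y in [([0.0], -1), ([1.0], 1), ([0.1], -1), ([0.9], 1)]:
    p.update(x, y)
```

Each mistake grows the support set, which is exactly why an unbounded kernel perceptron cannot fit a fixed memory budget.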

As expected, the most constrained memory budgets ([…], 450 for […], 800 for the […] data set) resulted in considerably less accurate classifiers, although accuracy was in all cases much higher than the 50% of the trivial predictor. Clearly, as the memory budget increased, the accuracy of our algorithm slowly approached that of the kernel perceptron […] the memory budget. This behavior is an expected consequence of the tradeoffs between the number of support vectors and their precision estimated by […].

5 Conclusions

[…] the Compressed Kernel Perceptron algorithm that allows […] with extremely limited memory budgets. The algorithm is based on Random Perceptron, a simple and memory-friendly online learning algorithm that keeps the number of support vectors constant by removing a random support vector upon addition of a new one. Compressed Kernel Perceptron estimates the distortions due to removal of vectors and decides on the optimal tradeoff between the number of support vectors and their precision. The experimental results showed that accurate classifiers could be learned efficiently while consuming very little memory. […]ng lower bounds on memory needed to build an accurate classifier. They indic[…] complexity and dimensionality. On the data dimensionality side, it is possible that careful […]mation could lead to a decrease in the number of attributes and improve utilization of the available memory. What remains an open problem is whether it is possible to perform attribute selection as part of the memory-constrained online algorithm without introducing significant time and memory overhead.
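The random-removal budget maintenance of Random Perceptron described above can be sketched as follows. Only the policy itself (add on a mistake, evict a uniformly random support vector when over budget) comes from the text; the function names and budget size are illustrative assumptions:

```python
import random

def budgeted_update(support_vectors, alphas, x, y, budget, rng=random):
    """Random Perceptron step: add (x, y) as a new support vector; if the
    budget is exceeded, discard one support vector chosen uniformly at
    random (here the new vector itself may also be the one discarded)."""
    support_vectors.append(x)
    alphas.append(float(y))
    if len(support_vectors) > budget:
        i = rng.randrange(len(support_vectors))
        support_vectors.pop(i)   # drop instance and its coefficient together
        alphas.pop(i)

svs, alphas = [], []
rng = random.Random(0)  # seeded only to make the sketch reproducible
for t in range(10):
    budgeted_update(svs, alphas, [float(t)], 1 if t % 2 else -1, budget=4, rng=rng)
```

Because one vector is evicted for every vector added past the budget, the support set size stays constant, which is what keeps memory use bounded.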

In equation (2) we mentioned ancillary variables but did not discuss them further. In our implementation of Compressed Kernel Perceptron, we require several ancillary variables that are in single-precision format. These include variables to calculate the kernel distance, the quantization and removal loss, to determine the perceptron prediction, and to perform precision reduction. By observing that some of these variables can be reused for different purposes, our current implementation requires […], which introduces an overhead of about 200 additional bits. In extremely memory-limited applications it would be interesting to […]. There are many avenues for future research. One is implementation of Compressed Kernel Perceptron on floating- and fixed-point microcontrollers. Another is exploring how […]
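The tradeoff between the number of support vectors and their bit precision discussed above can be made concrete with a little arithmetic. The ~200-bit ancillary overhead is taken from the text; the budget size and dimensionality below are purely illustrative assumptions:

```python
def max_support_vectors(budget_bits, dim, bits_per_value, overhead_bits=200):
    """Assume each support vector stores `dim` input values plus one alpha
    coefficient, all at `bits_per_value` precision, and that the ancillary
    variables cost roughly `overhead_bits` (about 200 bits per the text)."""
    per_sv = (dim + 1) * bits_per_value
    return max(0, (budget_bits - overhead_bits) // per_sv)

# On a hypothetical 32 Kbit budget with 10-dimensional data, halving the
# precision from 32 to 16 bits roughly doubles the storable support vectors.
for bits in (32, 16, 8, 4):
    print(bits, max_support_vectors(32_000, 10, bits))
```

Lower precision admits more support vectors but distorts each of them more, which is precisely the tradeoff Compressed Kernel Perceptron tries to optimize.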