Question

You do not need to implement the support vector machine itself; the library handles that for you. Linear classifiers, however, are inherently binary, while ours is a 10-way classification problem. You will therefore train 10 binary one-vs-all SVMs to decide which of the ten categories a test instance belongs to: one classifier will be trained to distinguish 'bird' from 'non-bird', another 'cat' from 'non-cat', and so on.

At test time, all 10 classifiers are evaluated on each test case, and the classifier with the highest confidence "wins." For example, if the 'cat' classifier returns a score of -0.2 (where 0 lies on the decision boundary), the 'bird' classifier returns -0.3, and all of the other classifiers are even more negative, the test case is classified as 'cat', even though none of the classifiers placed it on the positive side of its decision boundary. When training an SVM you have a free parameter C that controls how strongly the model is regularized. Your accuracy will be quite sensitive to C, so experiment with a range of values.
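As a concrete illustration of this winner-takes-all rule, here is a minimal sketch using scikit-learn's LinearSVC, which internally trains one binary one-vs-rest classifier per class. The names train_feat, train_label, test_feat, and test_label are assumptions standing in for the bag-of-SIFT features and category labels produced elsewhere in the assignment:

import numpy as np
from sklearn.svm import LinearSVC

# train_feat/test_feat: (n_samples, n_features) arrays; train_label/test_label:
# category labels -- assumed to come from the feature-extraction step below.
for C in [1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]:       # accuracy is sensitive to C
    clf = LinearSVC(C=C)                            # one binary one-vs-rest SVM per class
    clf.fit(train_feat, train_label)
    scores = clf.decision_function(test_feat)       # (n_test, 10) signed distances to each boundary
    pred = clf.classes_[np.argmax(scores, axis=1)]  # the most confident classifier wins
    print(f"C={C}: accuracy={np.mean(pred == np.asarray(test_label)):.3f}")

Note that clf.predict performs exactly this argmax internally; it is written out here only to mirror the description above.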

 

Hints:

To train and predict, use the SVM implementation in scikit-learn or OpenCV.
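If you take the OpenCV route instead, a minimal linear-SVM sketch might look like the following, assuming float32 feature arrays and integer class labels (train_feat, train_label, and test_feat are the same assumed names as above):

import numpy as np
import cv2

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setC(1.0)  # regularization strength; tune this value
svm.train(train_feat.astype(np.float32), cv2.ml.ROW_SAMPLE,
          train_label.astype(np.int32))
_, pred = svm.predict(test_feat.astype(np.float32))  # pred: (n_test, 1) float32 labels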

 

CODE:
 
import numpy as np
from sklearn import svm

np.random.seed(56)

##########--WRITE YOUR CODE HERE--##########
# The following steps are just for your reference;
# you can write it in your own way.
#
# # densely sample keypoints
# def sample_kp(shape, stride, size):
#     return kp
#
# # extract vocabulary of SIFT features
# def extract_vocabulary(raw_data, key_point):
#     return vocabulary
#
# # extract Bag-of-SIFT representation of images
# def extract_feat(raw_data, vocabulary, key_point):
#     return feat
#
# # sample dense keypoints
# skp = sample_kp((train_data[0].shape[0], train_data[0].shape[1]), (64, 64), 8)
# vocabulary = extract_vocabulary(train_data, skp)
# train_feat = extract_feat(train_data, vocabulary, skp)
# test_feat = extract_feat(test_data, vocabulary, skp)

train_feat = None  # TODO: assign the Bag-of-SIFT features for train_data
test_feat = None   # TODO: assign the Bag-of-SIFT features for test_data

##########-------END OF CODE-------##########
# this block should generate
# train_feat and test_feat corresponding to train_data and test_data
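One way the reference steps above could be filled in, sketched under the assumption that the images are grayscale uint8 arrays and that OpenCV's SIFT (cv2.SIFT_create, in the main module from OpenCV 4.4 onward, otherwise in opencv-contrib-python) and scikit-learn's KMeans are available; the vocabulary size k=50 is an arbitrary illustrative choice:

import numpy as np
import cv2
from sklearn.cluster import KMeans

def sample_kp(shape, stride, size):
    # Keypoints on a regular grid, one every stride[0] x stride[1] pixels.
    return [cv2.KeyPoint(float(x), float(y), size)
            for y in range(0, shape[0], stride[0])
            for x in range(0, shape[1], stride[1])]

def extract_vocabulary(raw_data, key_point, k=50):
    # Cluster all dense SIFT descriptors into k visual words.
    sift = cv2.SIFT_create()
    descs = [sift.compute(img, key_point)[1] for img in raw_data]
    return KMeans(n_clusters=k, random_state=56).fit(np.vstack(descs))

def extract_feat(raw_data, vocabulary, key_point):
    # Represent each image as an L1-normalized histogram of visual words.
    sift = cv2.SIFT_create()
    feat = []
    for img in raw_data:
        words = vocabulary.predict(sift.compute(img, key_point)[1])
        hist = np.bincount(words, minlength=vocabulary.n_clusters)
        feat.append(hist / max(hist.sum(), 1))
    return np.array(feat)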