Selective Hypothesis Transfer for Lifelong Learning


dc.contributor.author Benavides-Prado, Diana
dc.contributor.author Koh, Yun Sing
dc.contributor.author Riddle, Patricia
dc.coverage.spatial Budapest, Hungary
dc.date.accessioned 2021-09-06T22:59:14Z
dc.date.available 2021-09-06T22:59:14Z
dc.date.issued 2019-07-19
dc.identifier.isbn 9781728119854
dc.identifier.issn 2161-4393
dc.identifier.uri https://hdl.handle.net/2292/56405
dc.description.abstract Selective transfer has been proposed as an alternative for transferring fragments of knowledge. Previous work showed that transferring selectively from a group of hypotheses helps speed up learning on a target task. Similarly, existing hypotheses could benefit from selective backward transfer of recent knowledge. This setting applies to supervised machine learning systems that observe a sequence of related tasks. We propose a novel scheme for bi-directional transfer between hypotheses learned sequentially using Support Vector Machines. Transfer occurs in two directions: forward and backward. During forward transfer, a new binary classification task is to be learned, and existing knowledge is used to reinforce the importance of subspaces of the target training data that are related to source support vectors. While this target task is learned, subspaces of knowledge shared between each source hypothesis and the target hypothesis are identified. Representations of these subspaces are learned and used to refine the sources through backward transfer. Although fundamental, the problem of hypothesis refinement has received very little exploration; we define this problem and propose a solution. Our experiments show that a learning system using our scheme can gain up to 5.5 units in mean classification accuracy on tasks learned sequentially, within 26.6% of the number of iterations required when these tasks are learned from scratch.
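
A minimal sketch of the forward-transfer idea described in the abstract, assuming scikit-learn SVMs: target training points that lie near the support vectors of a previously learned source SVM are up-weighted before the target SVM is fit. The helper name forward_transfer_fit, the weighting rule, and the parameters gamma and boost are illustrative assumptions, not the authors' algorithm.

    # Hypothetical sketch of selective forward transfer between SVM hypotheses;
    # the weighting scheme below is an assumption for illustration only.
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    def forward_transfer_fit(source_svm, X_target, y_target, gamma=1.0, boost=1.0):
        """Fit a target SVM, up-weighting target samples that lie near the
        support vectors of a previously learned source SVM."""
        sv = source_svm.support_vectors_              # knowledge retained from the source task
        sim = rbf_kernel(X_target, sv, gamma=gamma)   # similarity of each target point to each SV
        weights = 1.0 + boost * sim.max(axis=1)       # reinforce subspaces related to source SVs
        target_svm = SVC(kernel="rbf", gamma=gamma)
        target_svm.fit(X_target, y_target, sample_weight=weights)
        return target_svm

    # Usage on two related binary tasks (X_*, y_* are NumPy arrays):
    # source_svm = SVC(kernel="rbf").fit(X_source, y_source)
    # target_svm = forward_transfer_fit(source_svm, X_target, y_target)
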
dc.publisher IEEE
dc.relation.ispartof 2019 International Joint Conference on Neural Networks (IJCNN)
dc.relation.ispartofseries 2019 International Joint Conference on Neural Networks (IJCNN)
dc.rights Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. Previously published items are made available in accordance with the copyright policy of the publisher.
dc.rights.uri https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm
dc.subject Science & Technology
dc.subject Technology
dc.subject Computer Science, Artificial Intelligence
dc.subject Computer Science, Hardware & Architecture
dc.subject Engineering, Electrical & Electronic
dc.subject Computer Science
dc.subject Engineering
dc.subject Lifelong Machine Learning
dc.subject Transfer Learning
dc.subject Hypothesis Transfer Learning
dc.subject Classification
dc.subject MODEL
dc.title Selective Hypothesis Transfer for Lifelong Learning
dc.type Conference Item
dc.identifier.doi 10.1109/ijcnn.2019.8851778
pubs.begin-page 1
pubs.volume 00
dc.date.updated 2021-08-25T10:25:50Z
dc.rights.holder Copyright: The author
pubs.author-url http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000530893800097&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=6e41486220adb198d0efde5a3b153e7d
pubs.end-page 10
pubs.finish-date 2019-07-19
pubs.publication-status Published
pubs.start-date 2019-07-14
dc.rights.accessrights http://purl.org/eprint/accessRights/RestrictedAccess
pubs.elements-id 784369

