Show simple item record

dc.contributor.author: Yang, Shufan
dc.contributor.author: Wong-Lin, KongFatt
dc.contributor.author: Andrew, James
dc.contributor.author: Mak, Terrence
dc.contributor.author: McGinnity, T. Martin
dc.date.accessioned: 2017-03-07T11:50:51Z
dc.date.available: 2017-03-07T11:50:51Z
dc.date.issued: 2017-01-20
dc.identifier.citation: Yang, S., Wong-Lin, K., Andrew, J. et al. Neural Comput & Applic (2018) 30: 2697. https://doi.org/10.1007/s00521-017-2847-5
dc.identifier.issn: 0941-0643
dc.identifier.doi: 10.1007/s00521-017-2847-5
dc.identifier.uri: http://hdl.handle.net/2436/620401
dc.description.abstract: Using a programmable system-on-chip to implement computer vision functions poses many challenges due to tight constraints on cost, size and power consumption. In this work, we propose a new neuro-inspired image processing model and implement it on a Xilinx ZC702 system-on-chip board. By using an attractor neural network model to store the object’s contour information, we eliminate the computationally expensive re-initialisation of the curve evolution at every new iteration or frame. Our experimental results demonstrate that this integrated approach achieves accurate and robust object tracking even when targets are partially or completely occluded in the scene. Importantly, the system processes 640 × 480 video streams in real time at 30 frames per second using only one low-power Xilinx Zynq-7000 system-on-chip platform. This proof-of-concept work demonstrates the advantage of incorporating neuro-inspired features in solving image processing problems during occlusion.
dc.language.iso: en
dc.publisher: Springer
dc.relation.url: http://link.springer.com/10.1007/s00521-017-2847-5
dc.subject: Visual object tracking
dc.subject: Mean-shift
dc.subject: Level set
dc.subject: Attractor neural network model
dc.subject: Occlusion
dc.subject: System-on-chip
dc.title: A neuro-inspired visual tracking method based on programmable system-on-chip platform
dc.type: Journal article
dc.identifier.journal: Neural Computing and Applications
dc.date.accepted: 2017-01-10
rioxxterms.funder: University of Wolverhampton
rioxxterms.identifier.project: UOW070317SY
rioxxterms.version: AM
rioxxterms.licenseref.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
rioxxterms.licenseref.startdate: 2018-01-20
dc.source.volume: 30
dc.source.issue: 9
dc.source.beginpage: 2697
dc.source.endpage: 2708
refterms.dateFCD: 2018-10-19T09:28:38Z
refterms.versionFCD: AM
refterms.dateFOA: 2018-01-20T00:00:00Z
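The abstract describes storing an object's contour in an attractor neural network so the tracker can recover it under occlusion without re-initialising the curve evolution. As a rough illustration of the attractor idea only (a minimal Hopfield-style network on an assumed 64-bit encoded contour; this is not the paper's model or its system-on-chip implementation), a stored pattern can be recalled from a partially occluded copy:

```python
import numpy as np

# Illustrative sketch: a Hopfield-style attractor network stores one binary
# "contour" pattern and recalls it from an occluded version. The pattern size,
# encoding, and update rule are assumptions for illustration only.

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products with zeroed diagonal."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Synchronous sign updates until convergence or the step limit."""
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1          # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(0)
contour = rng.choice([-1, 1], size=64)   # stand-in for an encoded contour
w = train_hopfield(contour[None, :])

occluded = contour.copy()
occluded[:16] = -1                       # "occlude" a quarter of the pattern
restored = recall(w, occluded)
```

With a single stored pattern and at most a quarter of the bits corrupted, the network settles back onto the stored contour; in the paper this recalled shape is what lets the tracker skip the costly level-set re-initialisation each frame.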


Files in this item

Name: NCAA-D-15-01293_Final.pdf
Size: 1.237 MB
Format: PDF



Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc-nd/4.0/