Amin Banitalebi Dehkordi, PhD
(Amin Banitalebi)
CODE
Prototype-level implementations of some of the algorithms used in my papers can be found on this page.

Refer to my GitLab/GitHub pages for additional academic and non-academic code: GitLab  GitHub



LBVS-HDR:  
Learning Based Visual Saliency prediction for High Dynamic Range Video

LBVS-HDR is a Learning Based Visual Saliency detection model for High Dynamic Range video. As HDR video paves its way into the consumer market and the academic research community, an efficient HDR visual attention model is essential.

Disclaimer: This code is free for use only for research purposes. For any commercial use please contact us at: amin [dot] banitalebi [at] gmail [dot] com
 
Please cite the following reference paper if you need to cite this code:
 
[1] A. Banitalebi-Dehkordi, Y. Dong, M. T. Pourazad, and Panos Nasiopoulos, “A Learning Based Visual Saliency Fusion Model For High Dynamic Range Video (LBVS-
HDR),” 23rd European Signal Processing Conference, EUSIPCO 2015.  camera ready.pdf - published version
 
Download:  
        Code based on the implementation used in [1] can be downloaded from this link, or from GitLab.
 
 
*** How to use the code (version 2.0):
 
1) Generate the HDR saliency features:
- This version of the software only supports HDR video sequences stored as "*.hdr" files. Copy your "*.hdr" files, named in sorted frame order, into an input directory.
- In MATLAB, navigate to the "gbvs_Dong" directory. Run the "gbvs_install.m" script first, then open the "Dong_HDR_features.m" script and set your desired output feature folder. Run this script and select the input HDR folder.
- This code will generate various saliency features.
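The sorted-frame-name input layout expected in step 1 can be sanity-checked with a short script before running the MATLAB code. A minimal sketch in Python; the directory name "hdr_input" and the digit-based frame-name pattern are assumptions for illustration, not part of the original code:

```python
import re
from pathlib import Path

def sorted_hdr_frames(input_dir):
    """Return the *.hdr files in input_dir, sorted numerically by the
    frame number in the name, so frame_2.hdr precedes frame_10.hdr."""
    p = Path(input_dir)
    if not p.is_dir():
        return []
    def frame_key(f):
        # Use the last run of digits in the stem as the frame index;
        # fall back to plain name ordering when no digits are present.
        digits = re.findall(r"\d+", f.stem)
        return (int(digits[-1]) if digits else 0, f.name)
    return sorted(p.glob("*.hdr"), key=frame_key)

# Hypothetical input directory holding the extracted HDR frames.
for f in sorted_hdr_frames("hdr_input"):
    print(f.name)
```

Plain alphabetical sorting would place frame_10.hdr before frame_2.hdr, which is why the numeric key is used here.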
 
2) Prepare the features for LBVS-HDR code:
- Run the "prepare_features_LBVS_HDR.m" script with the appropriate input/output paths. This will prepares the features in a format needed by the learning  
module.
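Conceptually, this preparation step reshapes the per-frame feature maps into a matrix the learning module can consume: one row per pixel, one column per feature. A simplified NumPy sketch of that idea; the array shapes are synthetic, and the real script operates on the feature files produced in step 1:

```python
import numpy as np

def stack_feature_maps(feature_maps):
    """Flatten a list of H x W saliency feature maps into an
    (H*W) x num_features matrix, one pixel per row."""
    columns = [m.ravel() for m in feature_maps]
    return np.stack(columns, axis=1)

# Toy example: three 4x6 feature maps become a 24 x 3 feature matrix.
maps = [np.random.rand(4, 6) for _ in range(3)]
X = stack_feature_maps(maps)
print(X.shape)  # (24, 3)
```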
 
3) Generate the saliency maps using LBVS-HDR:
- Run the "map_fusion_RF.m" script with the appropriate input features path. Note that since you are only using the framework in a testing mode (i.e. using the  
already generated model - the 3GB file), you won't need to set the training parameters. However, we provide the training code as well, so you can create your own  
models tailored for your own datasets.
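As a rough illustration of learning-based fusion, the sketch below fits a linear least-squares combiner of feature maps to a target saliency map. This is a deliberately simplified stand-in for the random-forest model used by "map_fusion_RF.m": the regressor, shapes, and data here are all synthetic and chosen only to show the train-then-fuse structure:

```python
import numpy as np

def fit_fusion_weights(features, target):
    """features: (num_pixels, num_features); target: (num_pixels,).
    Fit least-squares weights mapping feature columns to the target map.
    (Stand-in for training the random-forest fusion model.)"""
    w, *_ = np.linalg.lstsq(features, target, rcond=None)
    return w

def fuse(features, w):
    """Predict a (flattened) saliency map from feature maps and weights."""
    return features @ w

rng = np.random.default_rng(0)
X = rng.random((24, 3))            # 3 synthetic feature maps, 24 pixels
true_w = np.array([0.5, 0.3, 0.2])
y = X @ true_w                     # synthetic "ground truth" saliency
w = fit_fusion_weights(X, y)
saliency = fuse(X, w)
```

The real model is nonlinear and far more expressive, but the workflow is the same: learn a mapping from feature maps to ground-truth saliency on training data, then apply it to new frames.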
 
Please feel free to send your questions to: amin [dot] banitalebi [at] gmail [dot] com




LBVS-3D:
Learning Based Visual Saliency prediction for Stereoscopic 3D Video

LBVS-3D is a computational model of visual attention for 3D video. It proposes several saliency features within a learning framework and provides the flexibility to incorporate those features, or any other features suited to the application, to predict visually salient regions in the form of saliency maps for 3D video frames.

Disclaimer: This code is free for use only for research purposes. For any commercial use please contact us at: amin [dot] banitalebi [at] gmail [dot] com
 
Please cite the following reference paper if you need to cite this code:
 
[1] A. Banitalebi-Dehkordi, M.T. Pourazad, and P. Nasiopoulos, "A Learning-Based Visual Saliency prediction model for stereoscopic 3D video (LBVS-3D),"  
Multimedia Tools and Applications, 2016, DOI 10.1007/s11042-016-4155-y.  camera ready.pdf - published version
 
Download:  
        Code based on the implementation used in [1] can be downloaded from this link, or from GitLab.




HV3D:
Human Visual System based quality metric for 3D video.

HV3D is an efficient full-reference quality metric for assessing the quality of stereoscopic 3D video.

Disclaimer: This code is free for use only for research purposes. For any commercial use please contact us at: amin [dot] banitalebi [at] gmail [dot] com
 
Please cite the following reference paper if you need to cite this code:
 
[1] A. Banitalebi-Dehkordi, M. T. Pourazad, and P. Nasiopoulos, "An Efficient Human Visual System Based Quality Metric for 3D Video," Springer Journal of  
Multimedia Tools and Applications, pp. 1-29, Feb. 2015, DOI: 10.1007/s11042-015-2466-z.  camera ready.pdf - published version
 
Download:  
        Code based on the implementation used in [1] can be downloaded from this link, or from GitLab.
 
 


Copyright © 2018 by "Amin Banitalebi"  ·  All Rights reserved  ·  E-Mail: amin[dot]banitalebi[at]gmail.com