Research in Psychology and Behavioral Sciences
ISSN (Print): 2333-4371  ISSN (Online): 2333-438X  Website: https://www.sciepub.com/journal/rpbs
Research in Psychology and Behavioral Sciences. 2018, 6(1), 15-26
DOI: 10.12691/rpbs-6-1-3
Open Access Article

Random, Scattered and Asymmetric Distribution of Fixational Eye Movement - Experimental Evidence

Zhao Songnian1, Cheng Zhongbin2, Wang Fengjiao2 and Zou Qi2

1LAPC, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China

2School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China

Pub. Date: March 28, 2018

Cite this paper:
Zhao Songnian, Cheng Zhongbin, Wang Fengjiao and Zou Qi. Random, Scattered and Asymmetric Distribution of Fixational Eye Movement - Experimental Evidence. Research in Psychology and Behavioral Sciences. 2018; 6(1):15-26. doi: 10.12691/rpbs-6-1-3

Abstract

The choice of test chart is critical in experiments on fixational eye movements (microsaccades). In this paper, the selected test images are designed to provide information relevant to cognitive psychology and visual information processing; in particular, a random-dot stereogram is included as a test image for recording the subjects' eye movement trajectories. Using computer vision and image feature analysis, we first predict the salient feature regions of each test chart and then compare them with the subjects' measured eye movement trajectories. The trajectories differ markedly between participants, and the distribution of fixation points is random, scattered and asymmetric; it cannot be attributed to any particular statistical distribution, and its statistical parameters cannot be determined. For this reason, this paper suggests that significance testing of eye movement data should rely on interval estimation, and the specific interval estimates are given. We also point out that a microsaccade corresponds to a transition between frames of the visual image, whereas an eye blink corresponds to a transition between the different primitives within a visual image. These results are valuable for the study of cognitive psychology and visual information processing.
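
The abstract argues that fixation-point data fit no particular statistical family, so significance statements should take the form of interval estimates. As a purely illustrative companion (not the authors' procedure), the Python sketch below computes a distribution-free bootstrap percentile interval; the function name, parameters, and the sample coordinates are hypothetical placeholders, not data from the study.

```python
import numpy as np

def bootstrap_interval(samples, stat=np.mean, n_boot=10000, alpha=0.05, seed=None):
    """Bootstrap percentile confidence interval for a statistic.

    Makes no assumption about the underlying distribution, which matches
    the abstract's premise that fixation-point data fit no known family.
    """
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        # Resample the fixation coordinates with replacement and recompute the statistic.
        resample = rng.choice(samples, size=samples.shape[0], replace=True)
        boot[i] = stat(resample)
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Example: interval estimate for the horizontal spread of fixation points.
# The coordinates below are synthetic placeholders, not measurements from the paper.
fix_x = np.array([1.2, -0.4, 3.1, 0.9, -2.2, 4.0, 0.3, -1.1, 2.5, 0.0])
print(bootstrap_interval(fix_x, stat=np.std))
```

A percentile bootstrap is used here only because it requires no distributional assumption; the interval estimates reported in the paper itself may be derived by a different method.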

Keywords:
fixation point; horizontal flip of image; eye movement trajectory; visual perception

Creative Commons: This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

Figures: the article includes 7 figures (not reproduced here).
