The Hong Kong University of Science and Technology

Prof. Julie Semmelhack

Prof. Semmelhack is an Associate Professor at the Division of Life Science, HKUST. Her key research interest is how visual stimuli are represented by neurons and how these representations are analysed and transformed into motor commands. Taking advantage of the unique transparent larval stage of zebrafish, she uses imaging and optogenetic techniques on intact, behaving animals to analyse their visual circuits.

Affiliation: Division of Life Science, HKUST

Email: jsemmelhack@ust.hk

Research Interest:

A fundamental question in visual neuroscience is how the key features of a visual stimulus are represented at different levels of the brain, and how this representation is transformed into a behavioural response. This process requires registering photons as a specific pattern of stimulation, extracting key features from the visual stimulus, analysing those features, and generating an appropriate behavioural response. Behavioural responses such as capturing prey or avoiding predators are well established in animals like zebrafish. Prof. Semmelhack’s lab has reported a head-fixed assay for studying prey capture strikes, demonstrated the importance of visual contrast in shaping behaviour, and found that a sweeping stimulus induces a freezing response in zebrafish.

Methodology:

Leveraging the unique transparent larval stage of zebrafish, Prof. Semmelhack’s lab uses imaging and optogenetic techniques to analyse neuronal activity in intact, behaving zebrafish. Two-photon imaging combined with an electrically tunable lens enables recording of individual neuron activity across a large volume during stimulation or behaviour. Holographic two-photon optogenetics enables selective manipulation of neurons to investigate their functions, and single-cell neuroanatomy and genetic manipulation of neuronal activity are also used.

Key Questions:

  1. How are the various characteristics of a visual stimulus represented at different levels of the brain?

  2. How is a visual stimulus read out and transformed into a motor command?

Quote from the PI:

“Placeholder quote”