Image segmentation plays a vital role in understanding and analyzing images. In medical imaging and radiology in particular, segmentation helps quantify disease, measure the volumes of anatomical structures, and analyze organ morphology. Deep learning, the state-of-the-art approach for most image analysis tasks, faces its own challenges in medical image segmentation, including the dimensionality of the images, the variety of imaging modalities, the number of target organs and their anatomical variation, and inconsistent labeling conditions. This dissertation contributes to tackling these problems by 1) introducing a 2.5D segmentation method with adaptive fusion to address the dimensionality problem in medical image analysis; 2) proposing a generic method that handles different medical imaging modalities and varying numbers of target organs; 3) introducing a reinforcement learning method that learns CNN architectures with a reasonable amount of computational resources; and 4) proposing a method that trains with shape priors from noisy, non-expert labels to reach expert-level performance.
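The first contribution, 2.5D segmentation with adaptive fusion, can be illustrated with a minimal sketch: run a 2D model on axial, coronal, and sagittal slices, then combine the three per-view probability volumes with learned softmax weights. All names here (`fuse_25d`, `fusion_logits`) are hypothetical illustrations, not the dissertation's actual implementation.

```python
import numpy as np

def softmax(w):
    # numerically stable softmax over the fusion logits
    e = np.exp(w - np.max(w))
    return e / e.sum()

def fuse_25d(prob_axial, prob_coronal, prob_sagittal, fusion_logits):
    """Adaptively fuse per-view 2D probability volumes into one 3D
    prediction via learned softmax weights (illustrative sketch only)."""
    weights = softmax(fusion_logits)                               # one weight per view, sums to 1
    stacked = np.stack([prob_axial, prob_coronal, prob_sagittal])  # shape (3, D, H, W)
    return np.tensordot(weights, stacked, axes=1)                  # weighted average, shape (D, H, W)

# Toy example: three views of per-voxel foreground probabilities.
rng = np.random.default_rng(0)
views = [rng.random((4, 8, 8)) for _ in range(3)]
fused = fuse_25d(*views, fusion_logits=np.array([0.2, 1.0, -0.5]))
mask = fused > 0.5  # final binary segmentation
```

In an actual model the `fusion_logits` would be trained jointly with the per-view networks, letting the fusion adapt to which viewing plane is most informative for each task.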