
Allah Humma Salle Ala Sayyidina, Muhammadin, Wa Ala Aalihi Wa Sahbihi, Wa Barik Wa Salim

EZMCQ Online Courses

Subject: Deep Reinforcement Learning | Topic: Convolutional Neural Networks


QNo. 1: What is a CNN and how does it differ from fully connected networks? (Level: Medium)
  1. Local Connectivity
    1. Each neuron connects only to nearby input region
    2. Focuses on local patterns instead of entire input
    3. Enables detection of edges, textures, small shapes
  2. Parameter Sharing
    1. Same filter weights are reused across entire input
    2. Reduces memory usage and improves learning efficiency
    3. Learns position-independent features using shared kernels
  3. Spatial Hierarchies
    1. Stacks layers to learn features from local to global
    2. Detects edges, parts, and then full objects
    3. Builds complex understanding by combining lower-level features
  4. Reduced Parameters
    1. Fewer weights than fully connected dense layers
    2. Lower overfitting risk due to compact architecture
    3. Faster training time with better generalization ability
  5. Translation Invariance
    1. Learns features irrespective of position in input
    2. Enables consistent detection across different spatial locations
    3. Useful for image recognition and visual decision tasks

Convolutional Neural Networks (CNNs) are a class of deep learning models specifically designed to process data with a grid-like topology, such as images. Unlike fully connected networks (FCNs), where each neuron is connected to every neuron in the previous and next layer, CNNs leverage the spatial structure of input data using convolutional filters. These filters slide across the input, activating when they recognize specific patterns, enabling CNNs to extract spatial hierarchies of features (e.g., edges, shapes, textures).
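The sliding-filter behaviour described above can be sketched in a few lines of pure Python. This is a minimal illustrative 1-D "valid" convolution, not the implementation any particular library uses; the `conv1d` helper name and the edge-detector kernel are assumptions for demonstration only.

```python
def conv1d(signal, kernel):
    """Slide `kernel` across `signal`; each output sees only a local window."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A simple difference kernel activates where neighboring values change,
# i.e., it detects an "edge" in the 1-D signal.
signal = [0, 0, 0, 1, 1, 1]
edge_kernel = [-1, 1]
print(conv1d(signal, edge_kernel))  # non-zero only where the step occurs
```

The single pair of kernel weights is reused at every position, which is exactly the combination of local connectivity and parameter sharing the answer goes on to describe.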

One key difference is local connectivity: CNNs connect each neuron in a layer to only a small region of the input, which captures local features more efficiently. Parameter sharing is another distinction: the same filter is reused across the entire input, greatly reducing the number of parameters. This makes CNNs more scalable and computationally efficient than FCNs for image and visual data.

CNNs also preserve spatial hierarchies, learning simple patterns at shallow layers and more complex features at deeper layers. Furthermore, translation invariance, the ability to recognize features regardless of their location, is naturally built into CNNs because the same filters are applied across different regions of the input. These characteristics make CNNs highly effective in computer vision tasks and particularly useful in Deep Reinforcement Learning (Deep RL), where agents interact with environments represented by image frames.
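Translation invariance can be demonstrated with a minimal pure-Python sketch: a 1-D "valid" convolution followed by a global max pool, so the detector's response no longer depends on where the pattern sits. The helper names (`conv1d`, `detect`) are illustrative assumptions, not library functions.

```python
def conv1d(signal, kernel):
    """Plain 1-D 'valid' convolution: slide the kernel over the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def detect(signal, kernel):
    """Convolve, then global-max-pool: position-independent detection."""
    return max(conv1d(signal, kernel))

pattern = [1, -1, 1]
early = [1, -1, 1, 0, 0, 0, 0]   # pattern at the start of the signal
late  = [0, 0, 0, 0, 1, -1, 1]   # same pattern shifted to the end
print(detect(early, pattern), detect(late, pattern))  # identical responses
```

Because the same kernel is applied at every position, shifting the pattern shifts the peak in the feature map but leaves the pooled detection value unchanged; this is the mechanism behind the invariance described above.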

In contrast, FCNs ignore spatial locality and often require significantly more parameters, making them less suited for visual or high-dimensional input. As a result, CNNs have become the backbone of many Deep RL algorithms, including Deep Q-Networks (DQNs), where they process raw pixel inputs into meaningful representations for learning policies.

  1. Local Connectivity
    Local connectivity in CNNs means that each neuron in a convolutional layer is connected only to a small, localized region of the input or previous layer. This window is known as the receptive field. This approach mimics the way the human visual cortex processes visual data: by first recognizing small patterns and then combining them into larger, more complex features. In contrast, fully connected layers consider the entire input at once, which is inefficient for spatially structured data like images. By focusing on local patterns, CNNs are better suited to recognize edges, textures, and shapes at varying levels of abstraction. Additionally, local connectivity reduces computational load, as fewer connections mean fewer weights to compute and update.
  2. Parameter Sharing
    Parameter sharing refers to the reuse of the same filter weights across different parts of the input in a CNN. Instead of learning a separate weight for every connection, CNNs use the same filter (or kernel) across the entire image. This drastically reduces the number of parameters compared to a fully connected network, especially for high-dimensional data. For instance, a 5×5 filter scanning a 100×100 image uses only 25 parameters, instead of the 10,000 a single fully connected neuron would need over the same input. This not only improves computational efficiency but also enhances generalization by learning position-independent features. In fully connected networks, each connection requires a unique parameter, which leads to higher memory usage and the risk of overfitting.
  3. Spatial Hierarchies
    CNNs build spatial hierarchies by stacking multiple convolutional layers. The initial layers detect basic features like edges and gradients. As the depth increases, the network learns more abstract and complex features, such as shapes, objects, and spatial relationships. This hierarchical structure enables CNNs to understand visual scenes in a layered and structured manner. In contrast, FCNs treat all input values equally without any sense of spatial or contextual relevance. This limits their ability to identify patterns that rely on pixel positions or relative arrangements. The hierarchical learning in CNNs is especially important in deep reinforcement learning, where understanding the spatial structure of frames leads to better decision-making.
  4. Reduced Parameters
    CNNs are significantly more parameter-efficient than FCNs due to local connectivity and parameter sharing. A single convolutional filter applied across a large input space may involve only a few dozen weights, while a fully connected layer of similar size might require millions. This parameter efficiency reduces the risk of overfitting, shortens training time, and allows for deeper networks without prohibitive computational cost. This scalability is crucial for applications like object recognition and deep reinforcement learning, where input images can be large and real-time performance is often required. FCNs, by contrast, become infeasible in size and speed as input dimensionality increases.
  5. Translation Invariance
    CNNs naturally exhibit translation invariance, the ability to recognize features regardless of their position in the input image. Since the same filters are applied across different regions, a feature detected in one part of the image will also be recognized in another. This is vital in environments where object positions change, such as game frames in reinforcement learning. Fully connected networks, on the other hand, have no inherent mechanism for location independence. They must re-learn the same feature in every possible position, making them inefficient and less robust. Translation invariance improves generalization and performance in dynamic visual tasks.
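The "local to global" growth from item 3 above can be quantified: for stride-1 convolutions, each additional layer widens the region of the original input that a neuron can see. A minimal sketch using the standard stride-1 receptive-field formula (the kernel sizes are illustrative, and the helper name is an assumption):

```python
def receptive_field(kernel_sizes):
    """Receptive field of stacked stride-1 convolution layers.

    Each layer with kernel size k adds (k - 1) to the region of the
    original input that one deep neuron can see.
    """
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

# Three stacked 3x3 layers already "see" a 7x7 input region, letting
# deeper layers combine local edges into parts and whole objects.
print(receptive_field([3, 3, 3]))  # 7
```

This is why shallow layers respond to edges while deep layers respond to whole objects: depth steadily enlarges the input context each neuron summarizes.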

