
Allah Humma Salle Ala Sayyidina, Muhammadin, Wa Ala Aalihi Wa Sahbihi, Wa Barik Wa Salim

EZMCQ Online Courses

AI Powered Knowledge Mining

User Guest viewing Subject: Deep Reinforcement Learning, Topic: Convolutional Neural Networks


QNo. 1: What is a CNN and how does it differ from fully connected networks? (Level: Medium)
  1. Local Connectivity
    1. Each neuron connects only to nearby input region
    2. Focuses on local patterns instead of entire input
    3. Enables detection of edges, textures, small shapes
  2. Parameter Sharing
    1. Same filter weights are reused across entire input
    2. Reduces memory usage and improves learning efficiency
    3. Learns position-independent features using shared kernels
  3. Spatial Hierarchies
    1. Stacks layers to learn features from local to global
    2. Detects edges, parts, and then full objects
    3. Builds complex understanding by combining lower-level features
  4. Reduced Parameters
    1. Fewer weights than fully connected dense layers
    2. Lower overfitting risk due to compact architecture
    3. Faster training time with better generalization ability
  5. Translation Invariance
    1. Learns features irrespective of position in input
    2. Enables consistent detection across different spatial locations
    3. Useful for image recognition and visual decision tasks


Convolutional Neural Networks (CNNs) are a class of deep learning models specifically designed to process data with a grid-like topology, such as images. Unlike fully connected networks (FCNs), where each neuron is connected to every neuron in the previous and next layer, CNNs leverage the spatial structure of input data using convolutional filters. These filters slide across the input, activating when they recognize specific patterns, enabling CNNs to extract spatial hierarchies of features (e.g., edges, shapes, textures).
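The sliding-filter idea can be sketched in a few lines of NumPy (illustrative only: real CNN layers add channels, padding, strides, and learned filter weights):

```python
import numpy as np

# Minimal single-channel, stride-1, "valid" convolution (cross-correlation),
# the core operation of a CNN layer.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The filter "slides" over the input, one local patch at a time.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted vertical-edge detector: responds where brightness jumps.
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])

response = conv2d(image, edge_kernel)
# The response peaks exactly at the dark-to-bright boundary in every row.
```

Learned filters behave the same way, except their weights are fitted by backpropagation rather than set by hand.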

One key difference is local connectivity: CNNs connect only a small region of the input to each neuron in a layer, which captures local features more efficiently. Parameter sharing is another distinction: the same filter is reused across the entire input, greatly reducing the number of parameters. This makes CNNs more scalable and computationally efficient than FCNs for image and visual data.
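A back-of-the-envelope parameter count makes the contrast concrete (the layer sizes below are arbitrary, chosen only for illustration):

```python
# Weights needed to produce hidden features from a 28x28 grayscale input.
h, w = 28, 28

# Fully connected: every pixel connects to every one of 128 hidden units.
hidden_units = 128
fc_weights = h * w * hidden_units      # 784 * 128 = 100,352 weights

# Convolutional: 32 shared 3x3 filters, each reused at every position.
num_filters, k = 32, 3
conv_weights = num_filters * k * k     # 32 * 9 = 288 weights

ratio = fc_weights // conv_weights     # the FC layer needs ~350x more
```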

CNNs also preserve spatial hierarchies, learning simple patterns at shallow layers and more complex features at deeper layers. Furthermore, translation invariance, the ability to recognize features regardless of their location, is naturally built into CNNs because the same filters are applied across different regions of the input. These characteristics make CNNs highly effective in computer vision tasks and particularly useful in Deep Reinforcement Learning (Deep RL), where agents interact with environments represented by image frames.
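One way to see the local-to-global hierarchy is to track the effective receptive field: with stacked 3x3, stride-1 layers, each additional layer lets a unit see a wider patch of the original input (a sketch, ignoring padding and pooling):

```python
# Effective receptive field of n stacked 3x3, stride-1 convolution layers:
# each layer adds (kernel - 1) input pixels to the region a unit can see.
def receptive_field(num_layers, kernel=3):
    rf = 1
    for _ in range(num_layers):
        rf += kernel - 1
    return rf

fields = [receptive_field(n) for n in (1, 2, 3)]
# Layer 1 sees 3x3 patches (edges), layer 2 sees 5x5, layer 3 sees 7x7:
# progressively larger structures built from smaller ones.
```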

In contrast, FCNs ignore spatial locality and often require significantly more parameters, making them less suited for visual or high-dimensional input. As a result, CNNs have become the backbone of many Deep RL algorithms, including Deep Q-Networks (DQNs), where they process raw pixel inputs into meaningful representations for learning policies.
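As a concrete instance, the conv stack reported for the Nature DQN (Mnih et al., 2015) reduces a stack of four 84x84 frames to a compact feature map before the fully connected head; the spatial arithmetic can be checked directly:

```python
# Output size of a "valid" convolution along one spatial dimension.
def conv_out(size, kernel, stride):
    return (size - kernel) // stride + 1

s = 84                      # 84x84x4 preprocessed Atari frame stack
s = conv_out(s, 8, 4)       # conv1: 32 filters, 8x8, stride 4 -> 20x20
s = conv_out(s, 4, 2)       # conv2: 64 filters, 4x4, stride 2 -> 9x9
s = conv_out(s, 3, 1)       # conv3: 64 filters, 3x3, stride 1 -> 7x7

flat_features = s * s * 64  # 3136 features flattened for the FC layers
```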

  1. Local Connectivity
    Local connectivity in CNNs means that each neuron in a convolutional layer is connected only to a small, localized region of the input or previous layer. This window is known as the receptive field. This approach mimics the way the human visual cortex processes visual data: first recognizing small patterns and then combining them into larger, more complex features. In contrast, fully connected layers consider the entire input at once, which is inefficient for spatially structured data like images. By focusing on local patterns, CNNs are better suited to recognize edges, textures, and shapes at varying levels of abstraction. Additionally, local connectivity reduces computational load, as fewer connections mean fewer weights to compute and update.
  2. Parameter Sharing
    Parameter sharing refers to the reuse of the same filter weights across different parts of the input in a CNN. Instead of learning a separate weight for every connection, CNNs use the same filter (or kernel) across the entire image. This drastically reduces the number of parameters compared to a fully connected network, especially for high-dimensional data. For instance, a 5×5 filter scanning a 100×100 image uses only 25 weights, whereas a single fully connected unit wired to every pixel would need 10,000. This not only improves computational efficiency but also enhances generalization by learning position-independent features. In fully connected networks, each connection requires a unique parameter, which leads to higher memory usage and a greater risk of overfitting.
  3. Spatial Hierarchies
    CNNs build spatial hierarchies through the stacking of multiple convolutional layers. The initial layers detect basic features like edges and gradients. As depth increases, the network learns more abstract and complex features, such as shapes, objects, and spatial relationships. This hierarchical structure enables CNNs to understand visual scenes in a layered and structured manner. In contrast, FCNs treat all input values equally, without any sense of spatial or contextual relevance. This limits their ability to identify patterns that rely on pixel positions or relative arrangements. The hierarchical learning in CNNs is especially important in deep reinforcement learning, where understanding the spatial structure of frames leads to better decision-making.
  4. Reduced Parameters
    CNNs are significantly more parameter-efficient than FCNs due to local connectivity and parameter sharing. A single convolutional filter applied across a large input space may involve only a few dozen weights, while a fully connected layer of similar size might require millions. This parameter efficiency reduces the risk of overfitting, shortens training time, and allows for deeper networks without prohibitive computational cost. This scalability is crucial for applications like object recognition and deep reinforcement learning, where input images can be large and real-time performance is often required. FCNs, by contrast, become infeasible in size and speed as input dimensionality increases.
  5. Translation Invariance
    CNNs naturally exhibit translation invariance: the ability to recognize features regardless of their position in the input image. Since the same filters are applied across different regions, a feature detected in one part of the image will also be recognized in another. This is vital in environments where object positions change, such as game frames in reinforcement learning. Fully connected networks, on the other hand, have no inherent mechanism for location independence. They must re-learn the same feature in every possible position, making them inefficient and less robust. Translation invariance improves generalization and performance in dynamic visual tasks.
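The behaviors above can be seen directly: because the convolution itself is translation-equivariant, the same shared filter yields the same peak response wherever the pattern sits, only at a shifted location. A 1D sketch:

```python
import numpy as np

# 1D "valid" correlation: one shared kernel applied at every position.
def correlate(signal, kernel):
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

pattern = np.array([1.0, -1.0, 1.0])   # the feature to detect
kernel = pattern.copy()                # a matched filter for that feature

a = np.zeros(10); a[1:4] = pattern     # pattern near the start
b = np.zeros(10); b[6:9] = pattern     # same pattern, shifted right

ra, rb = correlate(a, kernel), correlate(b, kernel)
# Both responses peak at the same value; only the peak's position moves.
```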

