Neural network (overview)
Artificial neural networks are a powerful class of model capable of processing many types of data. Although initially inspired by the connections within biological neural networks, modern artificial neural networks bear only a slight, high-level resemblance to their biological counterparts. Nonetheless, the analogy remains conceptually useful and is reflected in some of the terminology used. Individual 'neurons' in the network receive variably-weighted input from numerous other neurons in the more superficial layers. Activation of any single neuron depends on the cumulative input of these more superficial neurons. They, in turn, connect to many deeper neurons, again with variable weightings.
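The weighted-input behaviour described above can be sketched in a few lines of plain Python. This is a minimal illustration only; the particular weights, bias and sigmoid activation function are arbitrary choices for the example, not part of the article:

```python
import math

def neuron_output(inputs, weights, bias):
    """Activation of a single artificial neuron: a variably-weighted
    sum of its inputs plus a bias, passed through an activation
    function (here, a sigmoid)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# three inputs from 'more superficial' neurons, each with its own weight
activation = neuron_output([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=0.0)
print(activation)  # ≈ 0.599
```

The neuron fires more or less strongly depending on the cumulative weighted input, which is the behaviour the biological analogy is pointing at.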
Two broad types of neural network are:
 fully connected networks
 a simple kind of neural network in which every neuron in one layer is connected to every neuron in the next layer
 recurrent neural networks
 a neural network in which part or all of the output from the previous step is used as input for the current step. This is very useful for working with sequential data, for example, videos.
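The distinction between the two types can be sketched as follows. This is an illustrative toy in plain Python under assumed weight matrices; real networks learn these weights rather than fixing them by hand:

```python
import math

def dense_layer(x, W, b):
    """Fully connected layer: every output neuron receives weighted
    input from every element of x (one weight row per output neuron)."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def recurrent_step(x, h_prev, W_x, W_h, b):
    """Recurrent step: the current input x is combined with the hidden
    state h_prev carried over from the previous step's output."""
    return [math.tanh(sum(wx * xi for wx, xi in zip(rx, x)) +
                      sum(wh * hi for wh, hi in zip(rh, h_prev)) + bi)
            for rx, rh, bi in zip(W_x, W_h, b)]

# process a short sequence (e.g. consecutive video frames): each step
# sees both the current frame and the state from the previous step
h = [0.0, 0.0]
for frame in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    h = recurrent_step(frame, h,
                       W_x=[[0.5, 0.1], [0.2, 0.4]],
                       W_h=[[0.3, 0.0], [0.0, 0.3]],
                       b=[0.0, 0.0])
```

Because the hidden state `h` is fed back in at every step, information from earlier frames can influence the output for later frames, which is what makes the recurrent form suited to sequential data.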
The usefulness of neural networks stems from the fact that they are universal function approximators, meaning that given the appropriate parameters, they can represent a wide variety of interesting and dissimilar functions.
Furthermore, they are differentiable mathematical functions: for a given set of parameters, inputs and labels, one can find the gradient of the parameters with respect to a defined loss function, which in effect determines how the parameters should be altered in order to improve predictions.
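This idea can be made concrete with a deliberately tiny example: a one-parameter model fitted by repeatedly stepping the parameter against the gradient of a mean-squared-error loss. The model, data and learning rate below are assumptions chosen for illustration:

```python
def loss(w, xs, ys):
    """Mean squared error of a one-parameter model y_hat = w * x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    """Analytic gradient dL/dw -- obtainable precisely because the
    loss is a differentiable function of the parameter w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # underlying relation: y = 2x
w = 0.0                                      # initial parameter guess
for _ in range(100):
    w -= 0.05 * grad(w, xs, ys)              # step against the gradient
print(round(w, 3))  # → 2.0
```

The gradient tells us in which direction, and roughly how strongly, to alter the parameter so the loss decreases; training a full neural network applies the same principle to millions of parameters at once (via backpropagation).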
Related Radiopaedia articles
Artificial intelligence
 artificial intelligence (AI)
 imaging data sets
 computer-aided diagnosis (CAD)
 natural language processing
 machine learning (overview)
 visualizing and understanding neural networks
 common data preparation/preprocessing steps
 DICOM to bitmap conversion
 dimensionality reduction
 scaling
 centering
 normalization
 principal component analysis
 training, testing and validation datasets
 augmentation
 loss function

optimization algorithms
 ADAM
 momentum (Nesterov)
 stochastic gradient descent
 mini-batch gradient descent

regularisation
 linear and quadratic
 batch normalization
 ensembling
 rule-based expert systems
 glossary
 activation function
 anomaly detection
 automation bias
 backpropagation
 batch size
 computer vision
 concept drift
 cost function
 confusion matrix
 convolution
 cross validation
 curse of dimensionality
 dice similarity coefficient
 dimensionality reduction
 epoch
 feature extraction
 gradient descent
 hyperparameters
 image registration
 imputation
 iteration
 jaccard index
 linear algebra
 noise reduction
 normalization
 R (programming language)
 Python (programming language)
 segmentation
 semisupervised learning
 synthetic and augmented data
 overfitting
 transfer learning